diff --git a/CHANGELOG.md b/CHANGELOG.md index d796aff06..ecbff49b5 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,138 @@ Redisson Releases History ================================ ####Please Note: trunk is current development branch. +Try __ULTRA-FAST__ [Redisson PRO](https://redisson.pro) edition. + +####19-Feb-2017 - versions 2.8.0 and 3.3.0 released + +Feature - __`RClusteredLocalCachedMap` object added__ More details [here](https://github.com/redisson/redisson/wiki/7.-distributed-collections#713-map-data-partitioning) +Feature - __`RClusteredMapCache` object added__ More details [here](https://github.com/redisson/redisson/wiki/7.-distributed-collections#713-map-data-partitioning) +Feature - __`RClusteredSetCache` object added__ More details [here](https://github.com/redisson/redisson/wiki/7.-distributed-collections/#732-set-data-partitioning) +Feature - __`RPriorityQueue` object added__ More details [here](https://github.com/redisson/redisson/wiki/7.-distributed-collections/#715-priority-queue) +Feature - __`RPriorityDeque` object added__ More details [here](https://github.com/redisson/redisson/wiki/7.-distributed-collections/#716-priority-deque) +Feature - `removeAllListeners` and `removeListener` by instance methods added for `RTopic` and `RPatternTopic` +Feature - `RLockAsync` interface added +Improvement - `RRemoteService` is now able to support method overload +Fixed - `RLocalCachedMap` is not Redis cluster compatible +Fixed - cascade slaves are not supported in cluster mode +Fixed - shutdown checking during master change state check added +Fixed - master isn't checked during new slave discovery in Sentinel mode + +####02-Feb-2017 - versions 2.7.4 and 3.2.4 released + +Feature - Allow to specify Redisson instance/config during JCache cache creation +Fixed - `ByteBuf.release` method invocation is missed in `LZ4Codec` and `SnappyCodec` +Fixed - AssertionError during Redisson shutdown +Fixed - `RReadWriteLock.readLock` couldn't be acquired by same thread which has already acquired `writeLock` +Fixed - failed `RFairLock.tryLock` attempt retains caller thread in fairLock queue +Fixed - `factory already defined` error +Fixed - `JCache` expiration listener doesn't work +Fixed - `RLocalCachedMap` doesn't work with `SerializationCodec` +Fixed - `Can't find entry` error during operation execution on slave nodes + +####19-Jan-2017 - versions 2.7.3 and 3.2.3 released + +Redisson Team is pleased to announce __ULTRA-FAST__ Redisson PRO edition. 
+Performance measure results available in [Benchmark whitepaper](https://redisson.pro/Redisson%20PRO%20benchmark%20whitepaper.pdf) + +Feature - `RMap.getLock(key)` and `RMultimap.getLock(key)` methods added +Improvement - `RedissonSpringCacheManager` constructor with Redisson instance only added +Improvement - `CronSchedule` moved to `org.redisson.api` package +Fixed - RedissonBaseIterator.hasNext() doesn't return false in some cases +Fixed - NoSuchFieldError exception in `redisson-tomcat` modules +Fixed - ConnectionPool size not respected during redirect of cluster request +Fixed - `RSortedSet.removeAsync` and `RSortedSet.addAsync` +Fixed - `RBloomFilter.tryInit` were not validated properly +Fixed - CommandDecoder should print all replay body on error + +####19-Dec-2016 - versions 2.7.2 and 3.2.2 released + +Feature - `RList`, `RSet` and `RScoredSortedSet` implements `RSortable` interface with SORT command support +Feature - `NodeAsync` interface +Feature - `Node.info`, `Node.getNode` methods added +Fixed - elements distribution of `RBlockingFairQueue` across consumers +Fixed - `factory already defined` error during Redisson initialization under Apache Tomcat + +####14-Dec-2016 - versions 2.7.1 and 3.2.1 released + +Url format used in config files __has changed__. For example: + +"//127.0.0.1:6739" now should be written as "redis://127.0.0.1:6739" + +Feature - `RSet.removeRandom` allows to remove several members at once +Fixed - exceptions during shutdown +Fixed - redis url couldn't contain underscore in host name +Fixed - IndexOutOfBoundsException during response decoding +Fixed - command timeout didn't respect during topic subscription +Fixed - possible PublishSubscribe race-condition +Fixed - blocking queue/deque poll method blocks infinitely if delay less than 1 second + +####26-Nov-2016 - versions 2.7.0 and 3.2.0 released + +Feature - __Spring Session implementation__. More details [here](https://github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks/#145-spring-session) +Feature - __Tomcat Session Manager implementation__. More details [here](https://github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks/#144-tomcat-redis-session-manager) +Feature - __RDelayedQueue object added__. More details [here](https://github.com/redisson/redisson/wiki/7.-distributed-collections/#714-delayed-queue) +Feature - __RBlockingFairQueue object added__. More details [here](https://github.com/redisson/redisson/wiki/7.-distributed-collections/#713-blocking-fair-queue) +Feature - `RSortedSet.readAll` and `RQueue.readAll` methods added +Fixed - `RMap.getAll` doesn't not preserve the order of elements +Fixed - Wrong nodes parsing in result of cluster info command +Fixed - NullPointerException in CommandDecoder.handleResult +Fixed - Redisson shutdown status should be checked during async command invocation + +####07-Nov-2016 - versions 2.6.0 and 3.1.0 released + +Feature - __new object added__ `RBinaryStream`. 
More info about it [here](https://github.com/redisson/redisson/wiki/6.-distributed-objects/#62-binary-stream-holder) +Improvement - limit Payload String on RedisTimeoutException +Improvement - Elasticache master node change detection process optimization + +####27-Oct-2016 - versions 2.5.1 and 3.0.1 released + +Include all code changes from __2.2.27__ version + +Fixed - RMapCache.fastPutIfAbsentAsync doesn't take in account expiration +Fixed - timer field of RedisClient hasn't been initialized properly in some cases + +####27-Oct-2016 - version 2.2.27 released + +This version fixes old and annonying problem with `ConnectionPool exhusted` error. From this moment connection pool waits for free connection instead of throwing pool exhausted error. This leads to more effective Redis connection utilization. + +Improvement - remove `Connection pool exhausted` exception + +####17-Oct-2016 - version 3.0.0 released +Fully compatible with JDK 8. Includes all code changes from __2.5.0__ version + +Feature - `RFeature` extends `CompletionStage` + +####17-Oct-2016 - version 2.5.0 released +This version brings greatly improved version of `RLiveObjectService` and adds cascade handling, cyclic dependency resolving, simplified object creation. Read more in this [article](https://dzone.com/articles/java-distributed-in-memory-data-model-powered-by-r) + +Includes all code changes from __2.2.26__ version + +Feautre - COUNT and ASC/DESC support for `RGeo` radius methods +Feature - `RGeo` extends `RScoredSortedSet` +Feature - `RCascade` annotation support LiveObjectService +Improvement - `RId` generator should be empty by default +Improvement - support setter/getter with protected visibility scope for LiveObject +Fixed - `RMapCache` doesn't keep entries insertion order during iteration +Fixed - `@RId` is returned/overwritten by similarly named methods (thanks to Rui Gu) +Fixed - typo `getRemoteSerivce` -> `getRemoteService` (thanks to Slava Rosin) +Fixed - `RPermitExpirableSemaphore.availablePermits` doesn't return actual permits account under certain conditions +Fixed - `readAllValues` and `readAllEntrySet` methods of `RLocalCacheMap` return wrong values +Fixed - setter for collection field of LiveObject entity should rewrite collection content +Fixed - `RSetCache` TTL not updated if element already present +Fixed - `RLiveObjectService` swallow exceptions during `merge` or `persist` operation +Fixed - `RLiveObjectService` doesn't support protected constructors +Fixed - object with cyclic dependencies lead to stackoverflow during `RLiveObjectService.detach` process +Fixed - not persisted `REntity` object allowed to store automatically +Fixed - `RLexSortedSet.addAll` doesn't work +Fixed - `RLiveObjectService` can't detach content of List object +Fixed - `RLiveObjectService` doesn't create objects mapped to Redisson objects in runtime during getter accesss +Fixed - `RLiveObjectService` can't recognize id field of object without setter + +####17-Oct-2016 - version 2.2.26 released +Fixed - NPE in CommandDecoder +Fixed - PubSub connection re-subscription doesn't work in case when there is only one slave available + ####27-Sep-2016 - version 2.4.0 released Includes all code changes from __2.2.25__ version diff --git a/README.md b/README.md index b51445c88..fb9eede53 100644 --- a/README.md +++ b/README.md @@ -1,87 +1,113 @@ Redis based In-Memory Data Grid for Java. Redisson. 
==== -[![Maven Central](https://img.shields.io/maven-central/v/org.redisson/redisson.svg?style=flat-square)](https://maven-badges.herokuapp.com/maven-central/org.redisson/redisson/) - Based on high-performance async and lock-free Java Redis client and [Netty](http://netty.io) framework. -Redis 2.8+ and JDK 1.6+ compatible. +Redis 2.8+ compatible. + +| Stable Release Version | JDK Version compatibility | Release Date | +| ------------- | ------------- | ------------| +| 3.3.0 | 1.8+ | 19.02.2017 | +| 2.8.0 | 1.6, 1.7, 1.8 and Android | 19.02.2017 | -Please read [documentation](https://github.com/mrniko/redisson/wiki) for more details. -Redisson [releases history](https://github.com/mrniko/redisson/blob/master/CHANGELOG.md). +__NOTE__: Both version lines have same features except `CompletionStage` interface supported by 3.x.x line +Please read [documentation](https://github.com/redisson/redisson/wiki) for more details. +Redisson [releases history](https://github.com/redisson/redisson/blob/master/CHANGELOG.md) +Checkout more [code examples](https://github.com/redisson/redisson-examples) +Browse [javadocs](http://www.javadoc.io/doc/org.redisson/redisson/3.2.4) Licensed under the Apache License 2.0. -Welcome to support chat - [![Join the chat at https://gitter.im/mrniko/redisson](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/mrniko/redisson?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +Welcome to support chat [![Join the chat at https://gitter.im/mrniko/redisson](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/mrniko/redisson?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) Features ================================ -* [AWS ElastiCache](https://aws.amazon.com/elasticache/) servers mode: - 1. automatic new master server discovery - 2. automatic new slave servers discovery -* Cluster servers mode: +* Replicated servers mode (also supports [AWS ElastiCache](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Replication.html) and [Azure Redis Cache](https://azure.microsoft.com/en-us/services/cache/)): + 1. automatic master server change discovery +* Cluster servers mode (also supports [AWS ElastiCache Cluster](http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Clusters.html) and [Azure Redis Cache](https://azure.microsoft.com/en-us/services/cache/)): 1. automatic master and slave servers discovery - 2. automatic new master server discovery - 3. automatic new slave servers discovery - 4. automatic slave servers offline/online discovery - 5. automatic slots change discovery + 2. automatic status and topology update + 3. automatic slots change discovery * Sentinel servers mode: - 1. automatic master and slave servers discovery - 2. automatic new master server discovery - 3. automatic new slave servers discovery - 4. automatic slave servers offline/online discovery - 5. automatic sentinel servers discovery + 1. automatic master, slave and sentinel servers discovery + 2. 
automatic status and topology update * Master with Slave servers mode * Single server mode * Asynchronous interface for each object * Asynchronous connection pool * Thread-safe implementation * Lua scripting -* [Distributed objects](https://github.com/mrniko/redisson/wiki/6.-Distributed-objects) -* [Distributed collections](https://github.com/mrniko/redisson/wiki/7.-Distributed-collections) -* [Distributed locks and synchronizers](https://github.com/mrniko/redisson/wiki/8.-Distributed-locks-and-synchronizers) -* [Distributed services](https://github.com/mrniko/redisson/wiki/9.-distributed-services) -* [Spring cache](https://github.com/mrniko/redisson/wiki/14.-Integration%20with%20frameworks/#141-spring-cache) integration -* [Hibernate](https://github.com/mrniko/redisson/wiki/14.-Integration%20with%20frameworks/#142-hibernate) integration -* [Reactive Streams](https://github.com/mrniko/redisson/wiki/3.-operations-execution#32-reactive-way) -* [Redis pipelining](https://github.com/mrniko/redisson/wiki/10.-additional-features#102-execution-batches-of-commands) (command batches) +* [Distributed objects](https://github.com/redisson/redisson/wiki/6.-Distributed-objects) + Object holder, Binary stream holder, Geospatial holder, BitSet, AtomicLong, AtomicDouble, PublishSubscribe, + Bloom filter, HyperLogLog +* [Distributed collections](https://github.com/redisson/redisson/wiki/7.-Distributed-collections) + Map, Multimap, Set, List, SortedSet, ScoredSortedSet, LexSortedSet, Queue, Deque, Blocking Queue, Bounded Blocking Queue, Blocking Deque, Delayed Queue +* [Distributed locks and synchronizers](https://github.com/redisson/redisson/wiki/8.-Distributed-locks-and-synchronizers) + Lock, FairLock, MultiLock, RedLock, ReadWriteLock, Semaphore, PermitExpirableSemaphore, CountDownLatch +* [Distributed services](https://github.com/redisson/redisson/wiki/9.-distributed-services) + Remote service, Live Object service, Executor service, Scheduler service +* [Spring Cache](https://github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks/#141-spring-cache) implementation +* [Hibernate Cache](https://github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks/#142-hibernate-cache) implementation +* [JCache API (JSR-107)](https://github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks/#143-jcache-api-jsr-107-implementation) implementation +* [Tomcat Session Manager](https://github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks#144-tomcat-redis-session-manager) implementation +* [Spring Session](https://github.com/redisson/redisson/wiki/14.-Integration%20with%20frameworks/#145-spring-session) implementation +* [Reactive Streams](https://github.com/redisson/redisson/wiki/3.-operations-execution#32-reactive-way) +* [Redis pipelining](https://github.com/redisson/redisson/wiki/10.-additional-features#102-execution-batches-of-commands) (command batches) * Supports Android platform * Supports auto-reconnect * Supports failed to send command auto-retry * Supports OSGi * Supports many popular codecs ([Jackson JSON](https://github.com/FasterXML/jackson), [Avro](http://avro.apache.org/), [Smile](http://wiki.fasterxml.com/SmileFormatSpec), [CBOR](http://cbor.io/), [MsgPack](http://msgpack.org/), [Kryo](https://github.com/EsotericSoftware/kryo), [FST](https://github.com/RuedigerMoeller/fast-serialization), [LZ4](https://github.com/jpountz/lz4-java), [Snappy](https://github.com/xerial/snappy-java) and JDK Serialization) -* With over 900 unit tests +* With over 1000 unit 
tests -Projects using Redisson +Who uses Redisson ================================ -[Setronica](http://setronica.com/), [Monits](http://monits.com/), [Brookhaven National Laboratory](http://bnl.gov/), [Netflix Dyno client] (https://github.com/Netflix/dyno), [武林Q传](http://www.nbrpg.com/), [Ocous](http://www.ocous.com/), [Invaluable](http://www.invaluable.com/), [Clover](https://www.clover.com/) , [Apache Karaf Decanter](https://karaf.apache.org/projects.html#decanter), [Atmosphere Framework](http://async-io.org/), [BrandsEye](http://brandseye.com), [Datorama](http://datorama.com/), [BrightCloud](http://brightcloud.com/) +[Electronic Arts](http://ea.com), [Baidu](http://baidu.com), [New Relic Synthetics](https://newrelic.com/synthetics), [National Australia Bank](https://www.nab.com.au/), [Brookhaven National Laboratory](http://bnl.gov/), [Singtel](http://singtel.com), [Infor](http://www.infor.com/), [Setronica](http://setronica.com/), [Monits](http://monits.com/), [Netflix Dyno client] (https://github.com/Netflix/dyno), [武林Q传](http://www.nbrpg.com/), [Ocous](http://www.ocous.com/), [Invaluable](http://www.invaluable.com/), [Clover](https://www.clover.com/) , [Apache Karaf Decanter](https://karaf.apache.org/projects.html#decanter), [Atmosphere Framework](http://async-io.org/), [BrandsEye](http://brandseye.com), [Datorama](http://datorama.com/), [BrightCloud](http://brightcloud.com/), [Azar](http://azarlive.com/), [Snapfish](http://snapfish.com), [Crimson Hexagon](http://www.crimsonhexagon.com) Articles ================================ -[Java data structures powered by Redis. Introduction to Redisson (pdf)](http://redisson.org/Redisson.pdf) +[Java data structures powered by Redis. Introduction to Redisson (pdf)](https://redisson.org/Redisson.pdf) +[Redisson PRO vs. 
Jedis: Which Is Faster?](https://dzone.com/articles/redisson-pro-vs-jedis-which-is-faster) +[A Look at the Java Distributed In-Memory Data Model (Powered by Redis)](https://dzone.com/articles/java-distributed-in-memory-data-model-powered-by-r) [Distributed tasks Execution and Scheduling in Java, powered by Redis](https://dzone.com/articles/distributed-tasks-execution-and-scheduling-in-java) [Introducing Redisson Live Objects (Object Hash Mapping)](https://dzone.com/articles/introducing-redisson-live-object-object-hash-mappi) [Java Remote Method Invocation with Redisson](https://dzone.com/articles/java-remote-method-invocation-with-redisson) [Java Multimaps With Redis](https://dzone.com/articles/multimaps-with-redis) [Distributed lock with Redis](https://evuvatech.com/2016/02/05/distributed-lock-with-redis/) +Success stories +================================ + +[Moving from Hazelcast to Redis](https://engineering.datorama.com/moving-from-hazelcast-to-redis-b90a0769d1cb) + Quick start =============================== #### Maven + + + org.redisson + redisson + 3.3.0 + + org.redisson redisson - 2.4.0 + 2.8.0 + #### Gradle + // JDK 1.8+ compatible + compile 'org.redisson:redisson:3.3.0' + + // JDK 1.6+ compatible + compile 'org.redisson:redisson:2.8.0' - compile 'org.redisson:redisson:2.4.0' - #### Java ```java @@ -105,8 +131,11 @@ RExecutorService executor = redisson.getExecutorService("myExecutorService"); Downloads =============================== -[Redisson 2.4.0](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson&v=2.4.0&e=jar) -[Redisson node 2.4.0](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-all&v=2.4.0&e=jar) +[Redisson 3.3.0](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson&v=3.3.0&e=jar), +[Redisson node 3.3.0](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-all&v=3.3.0&e=jar) + +[Redisson 2.8.0](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson&v=2.8.0&e=jar), +[Redisson node 2.8.0](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-all&v=2.8.0&e=jar) ### Supported by diff --git a/pom.xml b/pom.xml index f0bf4e7d4..960251c48 100644 --- a/pom.xml +++ b/pom.xml @@ -3,7 +3,7 @@ org.redisson redisson-parent - 2.5.0-SNAPSHOT + 2.8.1-SNAPSHOT pom Redisson @@ -16,11 +16,18 @@ http://redisson.org/ + + true + 1.6 + 1.8 + UTF-8 + + - scm:git:git@github.com:mrniko/redisson.git - scm:git:git@github.com:mrniko/redisson.git - scm:git:git@github.com:mrniko/redisson.git - redisson-parent-0.9.0 + scm:git:git@github.com:redisson/redisson.git + scm:git:git@github.com:redisson/redisson.git + scm:git:git@github.com:redisson/redisson.git + HEAD @@ -51,6 +58,7 @@ redisson redisson-all + redisson-tomcat diff --git a/redisson-all/pom.xml b/redisson-all/pom.xml index c7116ea62..f2b1e9fa2 100644 --- a/redisson-all/pom.xml +++ b/redisson-all/pom.xml @@ -4,7 +4,7 @@ org.redisson redisson-parent - 2.5.0-SNAPSHOT + 2.8.1-SNAPSHOT ../ @@ -87,7 +87,7 @@ io.netty netty-transport-native-epoll linux-x86_64 - 4.0.41.Final + 4.1.8.Final com.esotericsoftware diff --git a/redisson-tomcat/README.md b/redisson-tomcat/README.md new file mode 100644 index 000000000..ff6bad985 --- /dev/null +++ b/redisson-tomcat/README.md @@ -0,0 +1,42 @@ +Redis based Tomcat 
Session Manager +=== + +Implements non-sticky session management backed by Redis. +Supports Tomcat 6.x, 7.x, 8.x + +Advantages +=== + +Current implementation differs from any other Tomcat Session Manager in terms of efficient storage and optimized writes. Each session attribute is written into Redis during each `HttpSession.setAttribute` invocation. While other solutions serialize whole session each time. + +Usage +=== +1. Add `RedissonSessionManager` into `context.xml` + ```xml + + ``` + `configPath` - path to Redisson JSON or YAML config. See [configuration wiki page](https://github.com/redisson/redisson/wiki/2.-Configuration) for more details. + +2. Copy two jars into `TOMCAT_BASE/lib` directory: + + 1. __For JDK 1.8+__ + [redisson-all-3.3.0.jar](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-all&v=3.3.0&e=jar) + + for Tomcat 6.x + [redisson-tomcat-6-3.3.0.jar](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-tomcat-6&v=3.3.0&e=jar) + for Tomcat 7.x + [redisson-tomcat-7-3.3.0.jar](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-tomcat-7&v=3.3.0&e=jar) + for Tomcat 8.x + [redisson-tomcat-8-3.3.0.jar](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-tomcat-8&v=3.3.0&e=jar) + + 1. __For JDK 1.6+__ + [redisson-all-2.8.0.jar](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-all&v=2.8.0&e=jar) + + for Tomcat 6.x + [redisson-tomcat-6-2.8.0.jar](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-tomcat-6&v=2.8.0&e=jar) + for Tomcat 7.x + [redisson-tomcat-7-2.8.0.jar](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-tomcat-7&v=2.8.0&e=jar) + for Tomcat 8.x + [redisson-tomcat-8-2.8.0.jar](https://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=org.redisson&a=redisson-tomcat-8&v=2.8.0&e=jar) + diff --git a/redisson-tomcat/pom.xml b/redisson-tomcat/pom.xml new file mode 100644 index 000000000..93169d2af --- /dev/null +++ b/redisson-tomcat/pom.xml @@ -0,0 +1,86 @@ + + 4.0.0 + + + org.redisson + redisson-parent + 2.8.1-SNAPSHOT + ../ + + + redisson-tomcat + pom + + Redisson/Tomcat + + + redisson-tomcat-6 + redisson-tomcat-7 + redisson-tomcat-8 + + + + + + maven-compiler-plugin + 3.5.1 + + ${source.version} + ${source.version} + true + true + + + + default-testCompile + process-test-sources + + testCompile + + + true + ${test.source.version} + ${test.source.version} + + + + + + + org.apache.maven.plugins + maven-javadoc-plugin + 2.10.4 + + + attach-javadocs + + jar + + + + + + + + + + org.redisson + redisson + ${project.version} + + + + org.apache.httpcomponents + fluent-hc + 4.5.2 + test + + + junit + junit + 4.12 + test + + + + diff --git a/redisson-tomcat/redisson-tomcat-6/pom.xml b/redisson-tomcat/redisson-tomcat-6/pom.xml new file mode 100644 index 000000000..3e87203a9 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/pom.xml @@ -0,0 +1,64 @@ + + 4.0.0 + + + org.redisson + redisson-tomcat + 2.8.1-SNAPSHOT + ../ + + + redisson-tomcat-6 + jar + + Redisson/Tomcat-6 + + + + org.apache.tomcat + catalina + 6.0.48 + provided + + + + + + + com.mycila + license-maven-plugin + 3.0 + + ${basedir} +
+                        <header>${basedir}/../../header.txt</header>
+                        <quiet>false</quiet>
+                        <failIfMissing>true</failIfMissing>
+                        <aggregate>false</aggregate>
+                        <includes>
+                            <include>src/main/java/org/redisson/</include>
+                        </includes>
+                        <excludes>
+                            <exclude>target/**</exclude>
+                        </excludes>
+                        <useDefaultExcludes>true</useDefaultExcludes>
+                        <mapping>
+                            <java>JAVADOC_STYLE</java>
+                        </mapping>
+                        <strictCheck>true</strictCheck>
+                        <useDefaultMapping>true</useDefaultMapping>
+                        <encoding>UTF-8</encoding>
+                    </configuration>
+                    <executions>
+                        <execution>
+                            <goals>
+                                <goal>check</goal>
+                            </goals>
+                        </execution>
+                    </executions>
+                </plugin>
+            </plugins>
+        </build>
+
+</project>
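For reference, the `<Manager>` element mentioned in the Usage section of `redisson-tomcat/README.md` above looks roughly like this (a minimal sketch: the class name and the `configPath` attribute match the `RedissonSessionManager` sources below, while the path shown is only an example):

```xml
<Context>
    <!-- configPath points to a Redisson JSON or YAML config file -->
    <Manager className="org.redisson.tomcat.RedissonSessionManager"
             configPath="${catalina.base}/conf/redisson.yaml"/>
</Context>
```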
diff --git a/redisson-tomcat/redisson-tomcat-6/src/main/java/org/redisson/tomcat/RedissonSession.java b/redisson-tomcat/redisson-tomcat-6/src/main/java/org/redisson/tomcat/RedissonSession.java new file mode 100644 index 000000000..a8981b95b --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/src/main/java/org/redisson/tomcat/RedissonSession.java @@ -0,0 +1,187 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.tomcat; + +import java.lang.reflect.Field; +import java.util.HashMap; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import org.apache.catalina.session.StandardSession; +import org.redisson.api.RMap; + +/** + * Redisson Session object for Apache Tomcat + * + * @author Nikita Koksharov + * + */ +public class RedissonSession extends StandardSession { + + private final RedissonSessionManager redissonManager; + private final Map attrs; + private RMap map; + + public RedissonSession(RedissonSessionManager manager) { + super(manager); + this.redissonManager = manager; + + try { + Field attr = StandardSession.class.getDeclaredField("attributes"); + attrs = (Map) attr.get(this); + } catch (Exception e) { + throw new IllegalStateException(e); + } + } + + private static final long serialVersionUID = -2518607181636076487L; + + @Override + public void setId(String id, boolean notify) { + super.setId(id, notify); + map = redissonManager.getMap(id); + } + + @Override + public void setCreationTime(long time) { + super.setCreationTime(time); + + if (map != null) { + Map newMap = new HashMap(3); + newMap.put("session:creationTime", creationTime); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + map.putAll(newMap); + } + } + + @Override + public void access() { + super.access(); + + if (map != null) { + Map newMap = new HashMap(2); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + map.putAll(newMap); + if (getMaxInactiveInterval() >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + } + + @Override + public void setMaxInactiveInterval(int interval) { + super.setMaxInactiveInterval(interval); + + if (map != null) { + map.fastPut("session:maxInactiveInterval", maxInactiveInterval); + if (maxInactiveInterval >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + } + + @Override + public void setValid(boolean isValid) { + super.setValid(isValid); + + if (map != null) { + map.fastPut("session:isValid", isValid); + } + } + + @Override + public void setNew(boolean isNew) { + super.setNew(isNew); + + if (map != null) { + map.fastPut("session:isNew", isNew); + } + } + + @Override + public void endAccess() { + boolean oldValue = isNew; + super.endAccess(); + + if (isNew != oldValue) { + map.fastPut("session:isNew", isNew); + } + } + + @Override + public void setAttribute(String 
name, Object value, boolean notify) { + super.setAttribute(name, value, notify); + + if (map != null && value != null) { + map.fastPut(name, value); + } + } + + @Override + protected void removeAttributeInternal(String name, boolean notify) { + super.removeAttributeInternal(name, notify); + + if (map != null) { + map.fastRemove(name); + } + } + + public void save() { + Map newMap = new HashMap(); + newMap.put("session:creationTime", creationTime); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + newMap.put("session:maxInactiveInterval", maxInactiveInterval); + newMap.put("session:isValid", isValid); + newMap.put("session:isNew", isNew); + + for (Entry entry : attrs.entrySet()) { + newMap.put(entry.getKey(), entry.getValue()); + } + + map.putAll(newMap); + + if (maxInactiveInterval >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + + public void load() { + Set> entrySet = map.readAllEntrySet(); + for (Entry entry : entrySet) { + if ("session:creationTime".equals(entry.getKey())) { + creationTime = (Long) entry.getValue(); + } else if ("session:lastAccessedTime".equals(entry.getKey())) { + lastAccessedTime = (Long) entry.getValue(); + } else if ("session:thisAccessedTime".equals(entry.getKey())) { + thisAccessedTime = (Long) entry.getValue(); + } else if ("session:maxInactiveInterval".equals(entry.getKey())) { + maxInactiveInterval = (Integer) entry.getValue(); + } else if ("session:isValid".equals(entry.getKey())) { + isValid = (Boolean) entry.getValue(); + } else if ("session:isNew".equals(entry.getKey())) { + isNew = (Boolean) entry.getValue(); + } else { + setAttribute(entry.getKey(), entry.getValue(), false); + } + } + } + +} diff --git a/redisson-tomcat/redisson-tomcat-6/src/main/java/org/redisson/tomcat/RedissonSessionManager.java b/redisson-tomcat/redisson-tomcat-6/src/main/java/org/redisson/tomcat/RedissonSessionManager.java new file mode 100644 index 000000000..6399320c0 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/src/main/java/org/redisson/tomcat/RedissonSessionManager.java @@ -0,0 +1,179 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.tomcat; + +import java.io.File; +import java.io.IOException; + +import org.apache.juli.logging.Log; +import org.apache.juli.logging.LogFactory; +import org.apache.catalina.Context; +import org.apache.catalina.Lifecycle; +import org.apache.catalina.LifecycleException; +import org.apache.catalina.LifecycleListener; +import org.apache.catalina.Session; +import org.apache.catalina.session.ManagerBase; +import org.apache.catalina.util.LifecycleSupport; +import org.redisson.Redisson; +import org.redisson.api.RMap; +import org.redisson.api.RedissonClient; +import org.redisson.config.Config; + +/** + * Redisson Session Manager for Apache Tomcat + * + * @author Nikita Koksharov + * + */ +public class RedissonSessionManager extends ManagerBase implements Lifecycle { + + private final Log log = LogFactory.getLog(RedissonSessionManager.class); + + protected LifecycleSupport lifecycle = new LifecycleSupport(this); + + private RedissonClient redisson; + private String configPath; + + public void setConfigPath(String configPath) { + this.configPath = configPath; + } + + public String getConfigPath() { + return configPath; + } + + @Override + public int getRejectedSessions() { + return 0; + } + + @Override + public void load() throws ClassNotFoundException, IOException { + } + + @Override + public void setRejectedSessions(int sessions) { + } + + @Override + public void unload() throws IOException { + } + + @Override + public void addLifecycleListener(LifecycleListener listener) { + lifecycle.addLifecycleListener(listener); + } + + @Override + public LifecycleListener[] findLifecycleListeners() { + return lifecycle.findLifecycleListeners(); + } + + @Override + public void removeLifecycleListener(LifecycleListener listener) { + lifecycle.removeLifecycleListener(listener); + } + + @Override + public Session createSession(String sessionId) { + RedissonSession session = (RedissonSession) createEmptySession(); + + session.setNew(true); + session.setValid(true); + session.setCreationTime(System.currentTimeMillis()); + session.setMaxInactiveInterval(((Context) getContainer()).getSessionTimeout() * 60); + + if (sessionId == null) { + sessionId = generateSessionId(); + } + + session.setId(sessionId); + session.save(); + + return session; + } + + public RMap getMap(String sessionId) { + return redisson.getMap("redisson_tomcat_session:" + sessionId); + } + + @Override + public Session findSession(String id) throws IOException { + Session result = super.findSession(id); + if (result == null && id != null) { + RedissonSession session = (RedissonSession) createEmptySession(); + session.setId(id); + session.load(); + return session; + } + + return result; + } + + @Override + public Session createEmptySession() { + return new RedissonSession(this); + } + + @Override + public void remove(Session session) { + super.remove(session); + + getMap(session.getId()).delete(); + } + + public RedissonClient getRedisson() { + return redisson; + } + + @Override + public void start() throws LifecycleException { + Config config = null; + try { + config = Config.fromJSON(new File(configPath)); + } catch (IOException e) { + // trying next format + try { + config = Config.fromYAML(new File(configPath)); + } catch (IOException e1) { + log.error("Can't parse json config " + configPath, e); + throw new LifecycleException("Can't parse yaml config " + configPath, e1); + } + } + + try { + redisson = Redisson.create(config); + } catch (Exception e) { + throw new LifecycleException(e); + } + + 
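+        // config parsed and Redisson client created; notify registered LifecycleListeners that the manager has started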
lifecycle.fireLifecycleEvent(START_EVENT, null); + } + + @Override + public void stop() throws LifecycleException { + try { + if (redisson != null) { + redisson.shutdown(); + } + } catch (Exception e) { + throw new LifecycleException(e); + } + + lifecycle.fireLifecycleEvent(STOP_EVENT, null); + } + +} diff --git a/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java b/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java new file mode 100644 index 000000000..dd8703be4 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java @@ -0,0 +1,153 @@ +package org.redisson.tomcat; + +import java.io.IOException; + +import org.apache.catalina.LifecycleException; +import org.apache.http.client.ClientProtocolException; +import org.apache.http.client.fluent.Executor; +import org.apache.http.client.fluent.Request; +import org.apache.http.cookie.Cookie; +import org.apache.http.impl.client.BasicCookieStore; +import org.junit.Assert; +import org.junit.Test; + +public class RedissonSessionManagerTest { + + @Test + public void testSwitchServer() throws LifecycleException, InterruptedException, ClientProtocolException, IOException { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "/src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + + Executor.closeIdleConnections(); + server.stop(); + + server = new TomcatServer("myapp", 8080, "/src/test/"); + server.start(); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "1234"); + remove(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + + @Test + public void testWriteReadRemove() throws LifecycleException, InterruptedException, ClientProtocolException, IOException { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "/src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1234"); + read(executor, "test", "1234"); + remove(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testRecreate() throws LifecycleException, InterruptedException, ClientProtocolException, IOException { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "/src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1"); + recreate(executor, "test", "2"); + read(executor, "test", "2"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testUpdate() throws LifecycleException, InterruptedException, ClientProtocolException, IOException { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "/src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1"); + read(executor, "test", "1"); + write(executor, "test", "2"); + read(executor, "test", "2"); + + Executor.closeIdleConnections(); + server.stop(); + } + + + @Test + public void 
testInvalidate() throws LifecycleException, InterruptedException, ClientProtocolException, IOException { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "/src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + invalidate(executor); + + Executor.closeIdleConnections(); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + private void write(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/write?key=" + key + "&value=" + value; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + + private void read(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/read?key=" + key; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals(value, response); + } + + private void remove(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/remove?key=" + key; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals(value, response); + } + + private void invalidate(Executor executor) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/invalidate"; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + + private void recreate(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/recreate?key=" + key + "&value=" + value; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + +} diff --git a/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/TestServlet.java b/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/TestServlet.java new file mode 100644 index 000000000..1c68da965 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/TestServlet.java @@ -0,0 +1,94 @@ +package org.redisson.tomcat; + +import java.io.IOException; + +import javax.servlet.ServletException; +import javax.servlet.http.HttpServlet; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; +import javax.servlet.http.HttpSession; + +public class TestServlet extends HttpServlet { + + private static final long serialVersionUID = 1243830648280853203L; + + @Override + protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { + HttpSession session = req.getSession(); + + if (req.getPathInfo().equals("/write")) { + String[] params = req.getQueryString().split("&"); + String key = null; + String value = null; + for (String param : params) { + String[] paramLine = param.split("="); + String keyParam = paramLine[0]; + String valueParam = paramLine[1]; + + if ("key".equals(keyParam)) { + key = 
valueParam; + } + if ("value".equals(keyParam)) { + value = valueParam; + } + } + session.setAttribute(key, value); + + resp.getWriter().print("OK"); + } else if (req.getPathInfo().equals("/read")) { + String[] params = req.getQueryString().split("&"); + String key = null; + for (String param : params) { + String[] line = param.split("="); + String keyParam = line[0]; + if ("key".equals(keyParam)) { + key = line[1]; + } + } + + Object attr = session.getAttribute(key); + resp.getWriter().print(attr); + } else if (req.getPathInfo().equals("/remove")) { + String[] params = req.getQueryString().split("&"); + String key = null; + for (String param : params) { + String[] line = param.split("="); + String keyParam = line[0]; + if ("key".equals(keyParam)) { + key = line[1]; + } + } + + session.removeAttribute(key); + resp.getWriter().print(String.valueOf(session.getAttribute(key))); + } else if (req.getPathInfo().equals("/invalidate")) { + session.invalidate(); + + resp.getWriter().print("OK"); + } else if (req.getPathInfo().equals("/recreate")) { + session.invalidate(); + + session = req.getSession(); + + String[] params = req.getQueryString().split("&"); + String key = null; + String value = null; + for (String param : params) { + String[] paramLine = param.split("="); + String keyParam = paramLine[0]; + String valueParam = paramLine[1]; + + if ("key".equals(keyParam)) { + key = valueParam; + } + if ("value".equals(keyParam)) { + value = valueParam; + } + } + session.setAttribute(key, value); + + resp.getWriter().print("OK"); + } + } + +} diff --git a/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/TomcatServer.java b/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/TomcatServer.java new file mode 100644 index 000000000..62c0525fb --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/src/test/java/org/redisson/tomcat/TomcatServer.java @@ -0,0 +1,109 @@ +package org.redisson.tomcat; + +import org.apache.catalina.Engine; +import org.apache.catalina.Host; +import org.apache.catalina.LifecycleException; +import org.apache.catalina.connector.Connector; +import org.apache.catalina.core.StandardContext; +import org.apache.catalina.startup.Embedded; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class TomcatServer { + + private Embedded server; + private int port; + private boolean isRunning; + + private static final Logger LOG = LoggerFactory.getLogger(TomcatServer.class); + private static final boolean isInfo = LOG.isInfoEnabled(); + + + /** + * Create a new Tomcat embedded server instance. Setup looks like: + *

+     * Server
+     *    Service
+     *        Connector
+     *        Engine
+     *            Host
+     *                Context
+     *
+ * & will be created automcatically. We need to hook the remaining to an {@link Embedded} instnace + * @param contextPath Context path for the application + * @param port Port number to be used for the embedded Tomcat server + * @param appBase Path to the Application files (for Maven based web apps, in general: /src/main/) + * @param shutdownHook If true, registers a server' shutdown hook with JVM. This is useful to shutdown the server + * in erroneous cases. + * @throws Exception + */ + public TomcatServer(String contextPath, int port, String appBase) { + if(contextPath == null || appBase == null || appBase.length() == 0) { + throw new IllegalArgumentException("Context path or appbase should not be null"); + } + if(!contextPath.startsWith("/")) { + contextPath = "/" + contextPath; + } + + this.port = port; + + server = new Embedded(); + server.setName("TomcatEmbeddedServer"); + + Host localHost = server.createHost("localhost", appBase); + localHost.setAutoDeploy(false); + + StandardContext rootContext = (StandardContext) server.createContext(contextPath, "webapp"); + rootContext.setDefaultWebXml("web.xml"); + localHost.addChild(rootContext); + + Engine engine = server.createEngine(); + engine.setDefaultHost(localHost.getName()); + engine.setName("TomcatEngine"); + engine.addChild(localHost); + + server.addEngine(engine); + + Connector connector = server.createConnector(localHost.getName(), port, false); + server.addConnector(connector); + + } + + /** + * Start the tomcat embedded server + */ + public void start() throws LifecycleException { + if(isRunning) { + LOG.warn("Tomcat server is already running @ port={}; ignoring the start", port); + return; + } + + if(isInfo) LOG.info("Starting the Tomcat server @ port={}", port); + + server.setAwait(true); + server.start(); + isRunning = true; + } + + /** + * Stop the tomcat embedded server + */ + public void stop() throws LifecycleException { + if(!isRunning) { + LOG.warn("Tomcat server is not running @ port={}", port); + return; + } + + if(isInfo) LOG.info("Stopping the Tomcat server"); + + server.stop(); + isRunning = false; + } + + public boolean isRunning() { + return isRunning; + } + +} \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-6/src/test/webapp/META-INF/context.xml b/redisson-tomcat/redisson-tomcat-6/src/test/webapp/META-INF/context.xml new file mode 100644 index 000000000..52b0eafd8 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/src/test/webapp/META-INF/context.xml @@ -0,0 +1,7 @@ + + + + + + \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-6/src/test/webapp/WEB-INF/redisson.yaml b/redisson-tomcat/redisson-tomcat-6/src/test/webapp/WEB-INF/redisson.yaml new file mode 100644 index 000000000..28466f3ba --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/src/test/webapp/WEB-INF/redisson.yaml @@ -0,0 +1,2 @@ +singleServerConfig: + address: "redis://127.0.0.1:6379" \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-6/src/test/webapp/WEB-INF/web.xml b/redisson-tomcat/redisson-tomcat-6/src/test/webapp/WEB-INF/web.xml new file mode 100644 index 000000000..5940ccb8a --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-6/src/test/webapp/WEB-INF/web.xml @@ -0,0 +1,22 @@ + + + + + testServlet + org.redisson.tomcat.TestServlet + 1 + + + + testServlet + /* + + + + 30 + + + diff --git a/redisson-tomcat/redisson-tomcat-7/pom.xml b/redisson-tomcat/redisson-tomcat-7/pom.xml new file mode 100644 index 000000000..e6b67b474 --- /dev/null +++ 
b/redisson-tomcat/redisson-tomcat-7/pom.xml @@ -0,0 +1,82 @@ + + 4.0.0 + + + org.redisson + redisson-tomcat + 2.8.1-SNAPSHOT + ../ + + + redisson-tomcat-7 + jar + + Redisson/Tomcat-7 + + + + org.apache.tomcat.embed + tomcat-embed-core + 7.0.73 + provided + + + org.apache.tomcat.embed + tomcat-embed-logging-juli + 7.0.73 + provided + + + org.apache.tomcat.embed + tomcat-embed-jasper + 7.0.73 + provided + + + org.apache.tomcat + tomcat-jasper + 7.0.73 + provided + + + + + + + com.mycila + license-maven-plugin + 3.0 + + ${basedir} +
+                        <header>${basedir}/../../header.txt</header>
+                        <quiet>false</quiet>
+                        <failIfMissing>true</failIfMissing>
+                        <aggregate>false</aggregate>
+                        <includes>
+                            <include>src/main/java/org/redisson/</include>
+                        </includes>
+                        <excludes>
+                            <exclude>target/**</exclude>
+                        </excludes>
+                        <useDefaultExcludes>true</useDefaultExcludes>
+                        <mapping>
+                            <java>JAVADOC_STYLE</java>
+                        </mapping>
+                        <strictCheck>true</strictCheck>
+                        <useDefaultMapping>true</useDefaultMapping>
+                        <encoding>UTF-8</encoding>
+                    </configuration>
+                    <executions>
+                        <execution>
+                            <goals>
+                                <goal>check</goal>
+                            </goals>
+                        </execution>
+                    </executions>
+                </plugin>
+            </plugins>
+        </build>
+
+</project>
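Both `RedissonSession` implementations (Tomcat 6 above, Tomcat 7 below) keep one Redis hash per session: every HTTP session attribute plus the `session:`-prefixed metadata entries written by `save()`. As a rough illustration (not part of this patch; the config path and session id are placeholders), such a hash can be read back with a plain `RedissonClient`:

```java
import java.io.File;

import org.redisson.Redisson;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class SessionMapInspector {

    public static void main(String[] args) throws Exception {
        // load a Redisson config the same way the session manager does (the path here is only an example)
        Config config = Config.fromYAML(new File("redisson.yaml"));
        RedissonClient redisson = Redisson.create(config);

        // RedissonSessionManager.getMap() prefixes every session id with "redisson_tomcat_session:"
        String sessionId = "ABCDEF0123456789"; // hypothetical session id
        RMap<String, Object> map = redisson.getMap("redisson_tomcat_session:" + sessionId);

        // metadata written by RedissonSession.save() sits alongside the user attributes
        System.out.println("creationTime = " + map.get("session:creationTime"));
        System.out.println("isValid      = " + map.get("session:isValid"));

        redisson.shutdown();
    }
}
```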
diff --git a/redisson-tomcat/redisson-tomcat-7/src/main/java/org/redisson/tomcat/RedissonSession.java b/redisson-tomcat/redisson-tomcat-7/src/main/java/org/redisson/tomcat/RedissonSession.java new file mode 100644 index 000000000..31db2671f --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-7/src/main/java/org/redisson/tomcat/RedissonSession.java @@ -0,0 +1,186 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.tomcat; + +import java.lang.reflect.Field; +import java.util.HashMap; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import org.apache.catalina.session.StandardSession; +import org.redisson.api.RMap; + +/** + * Redisson Session object for Apache Tomcat + * + * @author Nikita Koksharov + * + */ +public class RedissonSession extends StandardSession { + + private final RedissonSessionManager redissonManager; + private final Map attrs; + private RMap map; + + public RedissonSession(RedissonSessionManager manager) { + super(manager); + this.redissonManager = manager; + try { + Field attr = StandardSession.class.getDeclaredField("attributes"); + attrs = (Map) attr.get(this); + } catch (Exception e) { + throw new IllegalStateException(e); + } + } + + private static final long serialVersionUID = -2518607181636076487L; + + @Override + public void setId(String id, boolean notify) { + super.setId(id, notify); + map = redissonManager.getMap(id); + } + + @Override + public void setCreationTime(long time) { + super.setCreationTime(time); + + if (map != null) { + Map newMap = new HashMap(3); + newMap.put("session:creationTime", creationTime); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + map.putAll(newMap); + } + } + + @Override + public void access() { + super.access(); + + if (map != null) { + Map newMap = new HashMap(2); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + map.putAll(newMap); + if (getMaxInactiveInterval() >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + } + + @Override + public void setMaxInactiveInterval(int interval) { + super.setMaxInactiveInterval(interval); + + if (map != null) { + map.fastPut("session:maxInactiveInterval", maxInactiveInterval); + if (maxInactiveInterval >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + } + + @Override + public void setValid(boolean isValid) { + super.setValid(isValid); + + if (map != null) { + map.fastPut("session:isValid", isValid); + } + } + + @Override + public void setNew(boolean isNew) { + super.setNew(isNew); + + if (map != null) { + map.fastPut("session:isNew", isNew); + } + } + + @Override + public void endAccess() { + boolean oldValue = isNew; + super.endAccess(); + + if (isNew != oldValue) { + map.fastPut("session:isNew", isNew); + } + } + + @Override + public void setAttribute(String 
name, Object value, boolean notify) { + super.setAttribute(name, value, notify); + + if (map != null && value != null) { + map.fastPut(name, value); + } + } + + @Override + protected void removeAttributeInternal(String name, boolean notify) { + super.removeAttributeInternal(name, notify); + + if (map != null) { + map.fastRemove(name); + } + } + + public void save() { + Map newMap = new HashMap(); + newMap.put("session:creationTime", creationTime); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + newMap.put("session:maxInactiveInterval", maxInactiveInterval); + newMap.put("session:isValid", isValid); + newMap.put("session:isNew", isNew); + + for (Entry entry : attrs.entrySet()) { + newMap.put(entry.getKey(), entry.getValue()); + } + + map.putAll(newMap); + + if (maxInactiveInterval >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + + public void load() { + Set> entrySet = map.readAllEntrySet(); + for (Entry entry : entrySet) { + if ("session:creationTime".equals(entry.getKey())) { + creationTime = (Long) entry.getValue(); + } else if ("session:lastAccessedTime".equals(entry.getKey())) { + lastAccessedTime = (Long) entry.getValue(); + } else if ("session:thisAccessedTime".equals(entry.getKey())) { + thisAccessedTime = (Long) entry.getValue(); + } else if ("session:maxInactiveInterval".equals(entry.getKey())) { + maxInactiveInterval = (Integer) entry.getValue(); + } else if ("session:isValid".equals(entry.getKey())) { + isValid = (Boolean) entry.getValue(); + } else if ("session:isNew".equals(entry.getKey())) { + isNew = (Boolean) entry.getValue(); + } else { + setAttribute(entry.getKey(), entry.getValue(), false); + } + } + } + +} diff --git a/redisson-tomcat/redisson-tomcat-7/src/main/java/org/redisson/tomcat/RedissonSessionManager.java b/redisson-tomcat/redisson-tomcat-7/src/main/java/org/redisson/tomcat/RedissonSessionManager.java new file mode 100644 index 000000000..87d41f81a --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-7/src/main/java/org/redisson/tomcat/RedissonSessionManager.java @@ -0,0 +1,185 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.tomcat; + +import java.io.File; +import java.io.IOException; + +import org.apache.catalina.Context; +import org.apache.catalina.Lifecycle; +import org.apache.catalina.LifecycleException; +import org.apache.catalina.LifecycleListener; +import org.apache.catalina.LifecycleState; +import org.apache.catalina.Session; +import org.apache.catalina.session.ManagerBase; +import org.apache.catalina.util.LifecycleSupport; +import org.apache.juli.logging.Log; +import org.apache.juli.logging.LogFactory; +import org.redisson.Redisson; +import org.redisson.api.RMap; +import org.redisson.api.RedissonClient; +import org.redisson.config.Config; + +/** + * Redisson Session Manager for Apache Tomcat + * + * @author Nikita Koksharov + * + */ +public class RedissonSessionManager extends ManagerBase implements Lifecycle { + + private final Log log = LogFactory.getLog(RedissonSessionManager.class); + + protected LifecycleSupport lifecycle = new LifecycleSupport(this); + + private RedissonClient redisson; + private String configPath; + + public void setConfigPath(String configPath) { + this.configPath = configPath; + } + + public String getConfigPath() { + return configPath; + } + + @Override + public String getName() { + return RedissonSessionManager.class.getSimpleName(); + } + + @Override + public int getRejectedSessions() { + return 0; + } + + @Override + public void load() throws ClassNotFoundException, IOException { + } + + @Override + public void unload() throws IOException { + } + + @Override + public void addLifecycleListener(LifecycleListener listener) { + lifecycle.addLifecycleListener(listener); + } + + @Override + public LifecycleListener[] findLifecycleListeners() { + return lifecycle.findLifecycleListeners(); + } + + @Override + public void removeLifecycleListener(LifecycleListener listener) { + lifecycle.removeLifecycleListener(listener); + } + + @Override + public Session createSession(String sessionId) { + RedissonSession session = (RedissonSession) createEmptySession(); + + session.setNew(true); + session.setValid(true); + session.setCreationTime(System.currentTimeMillis()); + session.setMaxInactiveInterval(((Context) getContainer()).getSessionTimeout() * 60); + + if (sessionId == null) { + sessionId = generateSessionId(); + } + + session.setId(sessionId); + session.save(); + + return session; + } + + public RMap getMap(String sessionId) { + return redisson.getMap("redisson_tomcat_session:" + sessionId); + } + + @Override + public Session findSession(String id) throws IOException { + Session result = super.findSession(id); + if (result == null && id != null) { + RedissonSession session = (RedissonSession) createEmptySession(); + session.setId(id); + session.load(); + return session; + } + + return result; + } + + @Override + public Session createEmptySession() { + return new RedissonSession(this); + } + + @Override + public void remove(Session session) { + super.remove(session); + + getMap(session.getId()).delete(); + } + + public RedissonClient getRedisson() { + return redisson; + } + + @Override + protected void startInternal() throws LifecycleException { + super.startInternal(); + Config config = null; + try { + config = Config.fromJSON(new File(configPath)); + } catch (IOException e) { + // trying next format + try { + config = Config.fromYAML(new File(configPath)); + } catch (IOException e1) { + log.error("Can't parse json config " + configPath, e); + throw new LifecycleException("Can't parse yaml config " + configPath, e1); + } + } + + try { + redisson = 
Redisson.create(config); + } catch (Exception e) { + throw new LifecycleException(e); + } + + setState(LifecycleState.STARTING); + } + + @Override + protected void stopInternal() throws LifecycleException { + super.stopInternal(); + + setState(LifecycleState.STOPPING); + + try { + if (redisson != null) { + redisson.shutdown(); + } + } catch (Exception e) { + throw new LifecycleException(e); + } + + } + +} diff --git a/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java b/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java new file mode 100644 index 000000000..2d260c587 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java @@ -0,0 +1,152 @@ +package org.redisson.tomcat; + +import java.io.IOException; + +import org.apache.http.client.ClientProtocolException; +import org.apache.http.client.fluent.Executor; +import org.apache.http.client.fluent.Request; +import org.apache.http.cookie.Cookie; +import org.apache.http.impl.client.BasicCookieStore; +import org.junit.Assert; +import org.junit.Test; + +public class RedissonSessionManagerTest { + + @Test + public void testSwitchServer() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + + Executor.closeIdleConnections(); + server.stop(); + + server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "1234"); + remove(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + + @Test + public void testWriteReadRemove() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1234"); + read(executor, "test", "1234"); + remove(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testRecreate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1"); + recreate(executor, "test", "2"); + read(executor, "test", "2"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testUpdate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1"); + read(executor, "test", "1"); + write(executor, "test", "2"); + read(executor, "test", "2"); + + Executor.closeIdleConnections(); + server.stop(); + } + + + @Test + public void testInvalidate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + 
BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + invalidate(executor); + + Executor.closeIdleConnections(); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + private void write(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/write?key=" + key + "&value=" + value; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + + private void read(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/read?key=" + key; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals(value, response); + } + + private void remove(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/remove?key=" + key; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals(value, response); + } + + private void invalidate(Executor executor) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/invalidate"; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + + private void recreate(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/recreate?key=" + key + "&value=" + value; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + +} diff --git a/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/TestServlet.java b/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/TestServlet.java new file mode 100644 index 000000000..1c68da965 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/TestServlet.java @@ -0,0 +1,94 @@ +package org.redisson.tomcat; + +import java.io.IOException; + +import javax.servlet.ServletException; +import javax.servlet.http.HttpServlet; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; +import javax.servlet.http.HttpSession; + +public class TestServlet extends HttpServlet { + + private static final long serialVersionUID = 1243830648280853203L; + + @Override + protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { + HttpSession session = req.getSession(); + + if (req.getPathInfo().equals("/write")) { + String[] params = req.getQueryString().split("&"); + String key = null; + String value = null; + for (String param : params) { + String[] paramLine = param.split("="); + String keyParam = paramLine[0]; + String valueParam = paramLine[1]; + + if ("key".equals(keyParam)) { + key = valueParam; + } + if ("value".equals(keyParam)) { + value = valueParam; + } + } + session.setAttribute(key, value); + + resp.getWriter().print("OK"); + } else if (req.getPathInfo().equals("/read")) { + String[] params = req.getQueryString().split("&"); + String key = null; + for (String param : 
params) { + String[] line = param.split("="); + String keyParam = line[0]; + if ("key".equals(keyParam)) { + key = line[1]; + } + } + + Object attr = session.getAttribute(key); + resp.getWriter().print(attr); + } else if (req.getPathInfo().equals("/remove")) { + String[] params = req.getQueryString().split("&"); + String key = null; + for (String param : params) { + String[] line = param.split("="); + String keyParam = line[0]; + if ("key".equals(keyParam)) { + key = line[1]; + } + } + + session.removeAttribute(key); + resp.getWriter().print(String.valueOf(session.getAttribute(key))); + } else if (req.getPathInfo().equals("/invalidate")) { + session.invalidate(); + + resp.getWriter().print("OK"); + } else if (req.getPathInfo().equals("/recreate")) { + session.invalidate(); + + session = req.getSession(); + + String[] params = req.getQueryString().split("&"); + String key = null; + String value = null; + for (String param : params) { + String[] paramLine = param.split("="); + String keyParam = paramLine[0]; + String valueParam = paramLine[1]; + + if ("key".equals(keyParam)) { + key = valueParam; + } + if ("value".equals(keyParam)) { + value = valueParam; + } + } + session.setAttribute(key, value); + + resp.getWriter().print("OK"); + } + } + +} diff --git a/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/TomcatServer.java b/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/TomcatServer.java new file mode 100644 index 000000000..6ec7af3b6 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-7/src/test/java/org/redisson/tomcat/TomcatServer.java @@ -0,0 +1,65 @@ +package org.redisson.tomcat; + +import java.net.MalformedURLException; + +import javax.servlet.ServletException; + +import org.apache.catalina.LifecycleException; +import org.apache.catalina.startup.Tomcat; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class TomcatServer { + + private Tomcat tomcat = new Tomcat(); + private int port; + private boolean isRunning; + + private static final Logger LOG = LoggerFactory.getLogger(TomcatServer.class); + private static final boolean isInfo = LOG.isInfoEnabled(); + + public TomcatServer(String contextPath, int port, String appBase) throws MalformedURLException, ServletException { + if(contextPath == null || appBase == null || appBase.length() == 0) { + throw new IllegalArgumentException("Context path or appbase should not be null"); + } + if(!contextPath.startsWith("/")) { + contextPath = "/" + contextPath; + } + + tomcat.setBaseDir("."); // location where temp dir is created + tomcat.setPort(port); + tomcat.getHost().setAppBase("."); + + tomcat.addWebapp(contextPath, appBase + "webapp"); + } + + /** + * Start the tomcat embedded server + */ + public void start() throws LifecycleException { + tomcat.start(); + isRunning = true; + } + + /** + * Stop the tomcat embedded server + */ + public void stop() throws LifecycleException { + if(!isRunning) { + LOG.warn("Tomcat server is not running @ port={}", port); + return; + } + + if(isInfo) LOG.info("Stopping the Tomcat server"); + + tomcat.stop(); + tomcat.destroy(); + tomcat.getServer().await(); + isRunning = false; + } + + public boolean isRunning() { + return isRunning; + } + +} \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-7/src/test/webapp/META-INF/context.xml b/redisson-tomcat/redisson-tomcat-7/src/test/webapp/META-INF/context.xml new file mode 100644 index 000000000..52b0eafd8 --- /dev/null +++ 
b/redisson-tomcat/redisson-tomcat-7/src/test/webapp/META-INF/context.xml @@ -0,0 +1,7 @@ + + + + + + \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-7/src/test/webapp/WEB-INF/redisson.yaml b/redisson-tomcat/redisson-tomcat-7/src/test/webapp/WEB-INF/redisson.yaml new file mode 100644 index 000000000..28466f3ba --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-7/src/test/webapp/WEB-INF/redisson.yaml @@ -0,0 +1,2 @@ +singleServerConfig: + address: "redis://127.0.0.1:6379" \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-7/src/test/webapp/WEB-INF/web.xml b/redisson-tomcat/redisson-tomcat-7/src/test/webapp/WEB-INF/web.xml new file mode 100644 index 000000000..5940ccb8a --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-7/src/test/webapp/WEB-INF/web.xml @@ -0,0 +1,22 @@ + + + + + testServlet + org.redisson.tomcat.TestServlet + 1 + + + + testServlet + /* + + + + 30 + + + diff --git a/redisson-tomcat/redisson-tomcat-8/pom.xml b/redisson-tomcat/redisson-tomcat-8/pom.xml new file mode 100644 index 000000000..448561403 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-8/pom.xml @@ -0,0 +1,81 @@ + + 4.0.0 + + + org.redisson + redisson-tomcat + 2.8.1-SNAPSHOT + ../ + + + redisson-tomcat-8 + jar + + Redisson/Tomcat-8 + + + + org.apache.tomcat.embed + tomcat-embed-core + 8.0.39 + provided + + + org.apache.tomcat.embed + tomcat-embed-logging-juli + 8.0.39 + provided + + + org.apache.tomcat.embed + tomcat-embed-jasper + 8.0.39 + provided + + + org.apache.tomcat + tomcat-jasper + 8.0.39 + provided + + + + + + + com.mycila + license-maven-plugin + 3.0 + + ${basedir} +
<header>${basedir}/../../header.txt</header>
+ false + true + false + + src/main/java/org/redisson/ + + + target/** + + true + + JAVADOC_STYLE + + true + true + UTF-8 +
+ + + + check + + + +
+ +
+
+ +
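The <Manager> element that wires this session manager into a web application belongs in the webapp's META-INF/context.xml (the test webapps in this patch ship it next to WEB-INF/redisson.yaml, whose singleServerConfig points at redis://127.0.0.1:6379). A minimal sketch, not taken verbatim from this patch, assuming the Redisson config sits at a filesystem path of your choosing; the configPath attribute is mapped by Tomcat onto RedissonSessionManager.setConfigPath, and startInternal accepts either the JSON or the YAML Redisson config format:

    <?xml version="1.0" encoding="UTF-8"?>
    <Context>
        <!-- Redis-backed HTTP session storage for this context.
             The configPath value below is a placeholder; point it at your own redisson.yaml or .json. -->
        <Manager className="org.redisson.tomcat.RedissonSessionManager"
                 configPath="/opt/tomcat/conf/redisson.yaml"/>
    </Context>

With this in place, every HttpSession attribute written by the application is mirrored into the Redis hash "redisson_tomcat_session:<sessionId>" created by RedissonSessionManager.getMap, which is what lets testSwitchServer read the same session back after the embedded Tomcat instance is restarted.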
diff --git a/redisson-tomcat/redisson-tomcat-8/src/main/java/org/redisson/tomcat/RedissonSession.java b/redisson-tomcat/redisson-tomcat-8/src/main/java/org/redisson/tomcat/RedissonSession.java new file mode 100644 index 000000000..a8981b95b --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-8/src/main/java/org/redisson/tomcat/RedissonSession.java @@ -0,0 +1,187 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.tomcat; + +import java.lang.reflect.Field; +import java.util.HashMap; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import org.apache.catalina.session.StandardSession; +import org.redisson.api.RMap; + +/** + * Redisson Session object for Apache Tomcat + * + * @author Nikita Koksharov + * + */ +public class RedissonSession extends StandardSession { + + private final RedissonSessionManager redissonManager; + private final Map attrs; + private RMap map; + + public RedissonSession(RedissonSessionManager manager) { + super(manager); + this.redissonManager = manager; + + try { + Field attr = StandardSession.class.getDeclaredField("attributes"); + attrs = (Map) attr.get(this); + } catch (Exception e) { + throw new IllegalStateException(e); + } + } + + private static final long serialVersionUID = -2518607181636076487L; + + @Override + public void setId(String id, boolean notify) { + super.setId(id, notify); + map = redissonManager.getMap(id); + } + + @Override + public void setCreationTime(long time) { + super.setCreationTime(time); + + if (map != null) { + Map newMap = new HashMap(3); + newMap.put("session:creationTime", creationTime); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + map.putAll(newMap); + } + } + + @Override + public void access() { + super.access(); + + if (map != null) { + Map newMap = new HashMap(2); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + map.putAll(newMap); + if (getMaxInactiveInterval() >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + } + + @Override + public void setMaxInactiveInterval(int interval) { + super.setMaxInactiveInterval(interval); + + if (map != null) { + map.fastPut("session:maxInactiveInterval", maxInactiveInterval); + if (maxInactiveInterval >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + } + + @Override + public void setValid(boolean isValid) { + super.setValid(isValid); + + if (map != null) { + map.fastPut("session:isValid", isValid); + } + } + + @Override + public void setNew(boolean isNew) { + super.setNew(isNew); + + if (map != null) { + map.fastPut("session:isNew", isNew); + } + } + + @Override + public void endAccess() { + boolean oldValue = isNew; + super.endAccess(); + + if (isNew != oldValue) { + map.fastPut("session:isNew", isNew); + } + } + + @Override + public void setAttribute(String 
name, Object value, boolean notify) { + super.setAttribute(name, value, notify); + + if (map != null && value != null) { + map.fastPut(name, value); + } + } + + @Override + protected void removeAttributeInternal(String name, boolean notify) { + super.removeAttributeInternal(name, notify); + + if (map != null) { + map.fastRemove(name); + } + } + + public void save() { + Map newMap = new HashMap(); + newMap.put("session:creationTime", creationTime); + newMap.put("session:lastAccessedTime", lastAccessedTime); + newMap.put("session:thisAccessedTime", thisAccessedTime); + newMap.put("session:maxInactiveInterval", maxInactiveInterval); + newMap.put("session:isValid", isValid); + newMap.put("session:isNew", isNew); + + for (Entry entry : attrs.entrySet()) { + newMap.put(entry.getKey(), entry.getValue()); + } + + map.putAll(newMap); + + if (maxInactiveInterval >= 0) { + map.expire(getMaxInactiveInterval(), TimeUnit.SECONDS); + } + } + + public void load() { + Set> entrySet = map.readAllEntrySet(); + for (Entry entry : entrySet) { + if ("session:creationTime".equals(entry.getKey())) { + creationTime = (Long) entry.getValue(); + } else if ("session:lastAccessedTime".equals(entry.getKey())) { + lastAccessedTime = (Long) entry.getValue(); + } else if ("session:thisAccessedTime".equals(entry.getKey())) { + thisAccessedTime = (Long) entry.getValue(); + } else if ("session:maxInactiveInterval".equals(entry.getKey())) { + maxInactiveInterval = (Integer) entry.getValue(); + } else if ("session:isValid".equals(entry.getKey())) { + isValid = (Boolean) entry.getValue(); + } else if ("session:isNew".equals(entry.getKey())) { + isNew = (Boolean) entry.getValue(); + } else { + setAttribute(entry.getKey(), entry.getValue(), false); + } + } + } + +} diff --git a/redisson-tomcat/redisson-tomcat-8/src/main/java/org/redisson/tomcat/RedissonSessionManager.java b/redisson-tomcat/redisson-tomcat-8/src/main/java/org/redisson/tomcat/RedissonSessionManager.java new file mode 100644 index 000000000..87d41f81a --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-8/src/main/java/org/redisson/tomcat/RedissonSessionManager.java @@ -0,0 +1,185 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.tomcat; + +import java.io.File; +import java.io.IOException; + +import org.apache.catalina.Context; +import org.apache.catalina.Lifecycle; +import org.apache.catalina.LifecycleException; +import org.apache.catalina.LifecycleListener; +import org.apache.catalina.LifecycleState; +import org.apache.catalina.Session; +import org.apache.catalina.session.ManagerBase; +import org.apache.catalina.util.LifecycleSupport; +import org.apache.juli.logging.Log; +import org.apache.juli.logging.LogFactory; +import org.redisson.Redisson; +import org.redisson.api.RMap; +import org.redisson.api.RedissonClient; +import org.redisson.config.Config; + +/** + * Redisson Session Manager for Apache Tomcat + * + * @author Nikita Koksharov + * + */ +public class RedissonSessionManager extends ManagerBase implements Lifecycle { + + private final Log log = LogFactory.getLog(RedissonSessionManager.class); + + protected LifecycleSupport lifecycle = new LifecycleSupport(this); + + private RedissonClient redisson; + private String configPath; + + public void setConfigPath(String configPath) { + this.configPath = configPath; + } + + public String getConfigPath() { + return configPath; + } + + @Override + public String getName() { + return RedissonSessionManager.class.getSimpleName(); + } + + @Override + public int getRejectedSessions() { + return 0; + } + + @Override + public void load() throws ClassNotFoundException, IOException { + } + + @Override + public void unload() throws IOException { + } + + @Override + public void addLifecycleListener(LifecycleListener listener) { + lifecycle.addLifecycleListener(listener); + } + + @Override + public LifecycleListener[] findLifecycleListeners() { + return lifecycle.findLifecycleListeners(); + } + + @Override + public void removeLifecycleListener(LifecycleListener listener) { + lifecycle.removeLifecycleListener(listener); + } + + @Override + public Session createSession(String sessionId) { + RedissonSession session = (RedissonSession) createEmptySession(); + + session.setNew(true); + session.setValid(true); + session.setCreationTime(System.currentTimeMillis()); + session.setMaxInactiveInterval(((Context) getContainer()).getSessionTimeout() * 60); + + if (sessionId == null) { + sessionId = generateSessionId(); + } + + session.setId(sessionId); + session.save(); + + return session; + } + + public RMap getMap(String sessionId) { + return redisson.getMap("redisson_tomcat_session:" + sessionId); + } + + @Override + public Session findSession(String id) throws IOException { + Session result = super.findSession(id); + if (result == null && id != null) { + RedissonSession session = (RedissonSession) createEmptySession(); + session.setId(id); + session.load(); + return session; + } + + return result; + } + + @Override + public Session createEmptySession() { + return new RedissonSession(this); + } + + @Override + public void remove(Session session) { + super.remove(session); + + getMap(session.getId()).delete(); + } + + public RedissonClient getRedisson() { + return redisson; + } + + @Override + protected void startInternal() throws LifecycleException { + super.startInternal(); + Config config = null; + try { + config = Config.fromJSON(new File(configPath)); + } catch (IOException e) { + // trying next format + try { + config = Config.fromYAML(new File(configPath)); + } catch (IOException e1) { + log.error("Can't parse json config " + configPath, e); + throw new LifecycleException("Can't parse yaml config " + configPath, e1); + } + } + + try { + redisson = 
Redisson.create(config); + } catch (Exception e) { + throw new LifecycleException(e); + } + + setState(LifecycleState.STARTING); + } + + @Override + protected void stopInternal() throws LifecycleException { + super.stopInternal(); + + setState(LifecycleState.STOPPING); + + try { + if (redisson != null) { + redisson.shutdown(); + } + } catch (Exception e) { + throw new LifecycleException(e); + } + + } + +} diff --git a/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java b/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java new file mode 100644 index 000000000..2d260c587 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/RedissonSessionManagerTest.java @@ -0,0 +1,152 @@ +package org.redisson.tomcat; + +import java.io.IOException; + +import org.apache.http.client.ClientProtocolException; +import org.apache.http.client.fluent.Executor; +import org.apache.http.client.fluent.Request; +import org.apache.http.cookie.Cookie; +import org.apache.http.impl.client.BasicCookieStore; +import org.junit.Assert; +import org.junit.Test; + +public class RedissonSessionManagerTest { + + @Test + public void testSwitchServer() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + + Executor.closeIdleConnections(); + server.stop(); + + server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "1234"); + remove(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + + @Test + public void testWriteReadRemove() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1234"); + read(executor, "test", "1234"); + remove(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testRecreate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1"); + recreate(executor, "test", "2"); + read(executor, "test", "2"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testUpdate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1"); + read(executor, "test", "1"); + write(executor, "test", "2"); + read(executor, "test", "2"); + + Executor.closeIdleConnections(); + server.stop(); + } + + + @Test + public void testInvalidate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + 
BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + invalidate(executor); + + Executor.closeIdleConnections(); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + private void write(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/write?key=" + key + "&value=" + value; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + + private void read(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/read?key=" + key; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals(value, response); + } + + private void remove(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/remove?key=" + key; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals(value, response); + } + + private void invalidate(Executor executor) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/invalidate"; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + + private void recreate(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/recreate?key=" + key + "&value=" + value; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + +} diff --git a/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/TestServlet.java b/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/TestServlet.java new file mode 100644 index 000000000..1c68da965 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/TestServlet.java @@ -0,0 +1,94 @@ +package org.redisson.tomcat; + +import java.io.IOException; + +import javax.servlet.ServletException; +import javax.servlet.http.HttpServlet; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; +import javax.servlet.http.HttpSession; + +public class TestServlet extends HttpServlet { + + private static final long serialVersionUID = 1243830648280853203L; + + @Override + protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { + HttpSession session = req.getSession(); + + if (req.getPathInfo().equals("/write")) { + String[] params = req.getQueryString().split("&"); + String key = null; + String value = null; + for (String param : params) { + String[] paramLine = param.split("="); + String keyParam = paramLine[0]; + String valueParam = paramLine[1]; + + if ("key".equals(keyParam)) { + key = valueParam; + } + if ("value".equals(keyParam)) { + value = valueParam; + } + } + session.setAttribute(key, value); + + resp.getWriter().print("OK"); + } else if (req.getPathInfo().equals("/read")) { + String[] params = req.getQueryString().split("&"); + String key = null; + for (String param : 
params) { + String[] line = param.split("="); + String keyParam = line[0]; + if ("key".equals(keyParam)) { + key = line[1]; + } + } + + Object attr = session.getAttribute(key); + resp.getWriter().print(attr); + } else if (req.getPathInfo().equals("/remove")) { + String[] params = req.getQueryString().split("&"); + String key = null; + for (String param : params) { + String[] line = param.split("="); + String keyParam = line[0]; + if ("key".equals(keyParam)) { + key = line[1]; + } + } + + session.removeAttribute(key); + resp.getWriter().print(String.valueOf(session.getAttribute(key))); + } else if (req.getPathInfo().equals("/invalidate")) { + session.invalidate(); + + resp.getWriter().print("OK"); + } else if (req.getPathInfo().equals("/recreate")) { + session.invalidate(); + + session = req.getSession(); + + String[] params = req.getQueryString().split("&"); + String key = null; + String value = null; + for (String param : params) { + String[] paramLine = param.split("="); + String keyParam = paramLine[0]; + String valueParam = paramLine[1]; + + if ("key".equals(keyParam)) { + key = valueParam; + } + if ("value".equals(keyParam)) { + value = valueParam; + } + } + session.setAttribute(key, value); + + resp.getWriter().print("OK"); + } + } + +} diff --git a/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/TomcatServer.java b/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/TomcatServer.java new file mode 100644 index 000000000..6ec7af3b6 --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-8/src/test/java/org/redisson/tomcat/TomcatServer.java @@ -0,0 +1,65 @@ +package org.redisson.tomcat; + +import java.net.MalformedURLException; + +import javax.servlet.ServletException; + +import org.apache.catalina.LifecycleException; +import org.apache.catalina.startup.Tomcat; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class TomcatServer { + + private Tomcat tomcat = new Tomcat(); + private int port; + private boolean isRunning; + + private static final Logger LOG = LoggerFactory.getLogger(TomcatServer.class); + private static final boolean isInfo = LOG.isInfoEnabled(); + + public TomcatServer(String contextPath, int port, String appBase) throws MalformedURLException, ServletException { + if(contextPath == null || appBase == null || appBase.length() == 0) { + throw new IllegalArgumentException("Context path or appbase should not be null"); + } + if(!contextPath.startsWith("/")) { + contextPath = "/" + contextPath; + } + + tomcat.setBaseDir("."); // location where temp dir is created + tomcat.setPort(port); + tomcat.getHost().setAppBase("."); + + tomcat.addWebapp(contextPath, appBase + "webapp"); + } + + /** + * Start the tomcat embedded server + */ + public void start() throws LifecycleException { + tomcat.start(); + isRunning = true; + } + + /** + * Stop the tomcat embedded server + */ + public void stop() throws LifecycleException { + if(!isRunning) { + LOG.warn("Tomcat server is not running @ port={}", port); + return; + } + + if(isInfo) LOG.info("Stopping the Tomcat server"); + + tomcat.stop(); + tomcat.destroy(); + tomcat.getServer().await(); + isRunning = false; + } + + public boolean isRunning() { + return isRunning; + } + +} \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-8/src/test/webapp/META-INF/context.xml b/redisson-tomcat/redisson-tomcat-8/src/test/webapp/META-INF/context.xml new file mode 100644 index 000000000..52b0eafd8 --- /dev/null +++ 
b/redisson-tomcat/redisson-tomcat-8/src/test/webapp/META-INF/context.xml @@ -0,0 +1,7 @@ + + + + + + \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-8/src/test/webapp/WEB-INF/redisson.yaml b/redisson-tomcat/redisson-tomcat-8/src/test/webapp/WEB-INF/redisson.yaml new file mode 100644 index 000000000..28466f3ba --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-8/src/test/webapp/WEB-INF/redisson.yaml @@ -0,0 +1,2 @@ +singleServerConfig: + address: "redis://127.0.0.1:6379" \ No newline at end of file diff --git a/redisson-tomcat/redisson-tomcat-8/src/test/webapp/WEB-INF/web.xml b/redisson-tomcat/redisson-tomcat-8/src/test/webapp/WEB-INF/web.xml new file mode 100644 index 000000000..5940ccb8a --- /dev/null +++ b/redisson-tomcat/redisson-tomcat-8/src/test/webapp/WEB-INF/web.xml @@ -0,0 +1,22 @@ + + + + + testServlet + org.redisson.tomcat.TestServlet + 1 + + + + testServlet + /* + + + + 30 + + + diff --git a/redisson/header.txt b/redisson/header.txt deleted file mode 100644 index ac956a4f9..000000000 --- a/redisson/header.txt +++ /dev/null @@ -1,13 +0,0 @@ -Copyright 2016 Nikita Koksharov - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. diff --git a/redisson/pom.xml b/redisson/pom.xml index 9ac0cc7a0..32222c03e 100644 --- a/redisson/pom.xml +++ b/redisson/pom.xml @@ -4,7 +4,7 @@ org.redisson redisson-parent - 2.5.0-SNAPSHOT + 2.8.1-SNAPSHOT ../ @@ -21,13 +21,6 @@ http://redisson.org/ - - true - 1.6 - 1.8 - UTF-8 - - unit-test @@ -41,35 +34,40 @@ io.netty netty-transport-native-epoll - 4.0.41.Final + 4.1.8.Final provided io.netty netty-common - 4.0.41.Final + 4.1.8.Final io.netty netty-codec - 4.0.41.Final + 4.1.8.Final io.netty netty-buffer - 4.0.41.Final + 4.1.8.Final io.netty netty-transport - 4.0.41.Final + 4.1.8.Final io.netty netty-handler - 4.0.41.Final + 4.1.8.Final + + javax.cache + cache-api + 1.0.0 + io.projectreactor reactor-stream @@ -107,6 +105,43 @@ test + + org.apache.tomcat.embed + tomcat-embed-core + 7.0.73 + test + + + org.apache.tomcat.embed + tomcat-embed-logging-juli + 7.0.73 + test + + + org.apache.tomcat.embed + tomcat-embed-jasper + 7.0.73 + test + + + org.apache.tomcat + tomcat-jasper + 7.0.73 + test + + + org.apache.httpcomponents + fluent-hc + 4.5.2 + test + + + org.springframework + spring-web + [3.1,) + test + + net.jpountz.lz4 lz4 @@ -203,7 +238,12 @@ [3.1,) provided - + + org.springframework.session + spring-session + 1.2.2.RELEASE + provided + @@ -359,7 +399,7 @@ org.apache.felix maven-bundle-plugin - 3.0.1 + 3.2.0 true @@ -375,7 +415,7 @@ 2.11 ${basedir} -
${basedir}/header.txt
+
${basedir}/../header.txt
false true false diff --git a/redisson/src/main/java/org/redisson/BaseRemoteService.java b/redisson/src/main/java/org/redisson/BaseRemoteService.java index 565de722d..31695cec9 100644 --- a/redisson/src/main/java/org/redisson/BaseRemoteService.java +++ b/redisson/src/main/java/org/redisson/BaseRemoteService.java @@ -20,7 +20,9 @@ import java.lang.annotation.Annotation; import java.lang.reflect.InvocationHandler; import java.lang.reflect.Method; import java.lang.reflect.Proxy; +import java.util.ArrayList; import java.util.Arrays; +import java.util.List; import java.util.concurrent.TimeUnit; import org.redisson.api.RBlockingQueue; @@ -184,7 +186,7 @@ public abstract class BaseRemoteService { final RBlockingQueue requestQueue = redisson.getBlockingQueue(requestQueueName, getCodec()); - final RemoteServiceRequest request = new RemoteServiceRequest(requestId, method.getName(), args, + final RemoteServiceRequest request = new RemoteServiceRequest(requestId, method.getName(), getMethodSignatures(method), args, optionsCopy, System.currentTimeMillis()); final RemotePromise result = new RemotePromise(commandExecutor.getConnectionManager().newPromise()) { @@ -243,7 +245,7 @@ public abstract class BaseRemoteService { String canceRequestName = getCancelRequestQueueName(remoteInterface, requestId); cancelExecution(optionsCopy, responseName, request, mayInterruptIfRunning, canceRequestName, this); - awaitUninterruptibly(); + awaitUninterruptibly(60, TimeUnit.SECONDS); return isCancelled(); } }; @@ -399,7 +401,7 @@ public abstract class BaseRemoteService { String requestQueueName = getRequestQueueName(remoteInterface); RBlockingQueue requestQueue = redisson.getBlockingQueue(requestQueueName, getCodec()); - RemoteServiceRequest request = new RemoteServiceRequest(requestId, method.getName(), args, optionsCopy, + RemoteServiceRequest request = new RemoteServiceRequest(requestId, method.getName(), getMethodSignatures(method), args, optionsCopy, System.currentTimeMillis()); requestQueue.add(request); @@ -537,4 +539,11 @@ public abstract class BaseRemoteService { } } + protected List getMethodSignatures(Method method) { + List list = new ArrayList(); + for (Class t : method.getParameterTypes()) { + list.add(t.getName()); + } + return list; + } } diff --git a/redisson/src/main/java/org/redisson/EvictionScheduler.java b/redisson/src/main/java/org/redisson/EvictionScheduler.java deleted file mode 100644 index 66ba67897..000000000 --- a/redisson/src/main/java/org/redisson/EvictionScheduler.java +++ /dev/null @@ -1,246 +0,0 @@ -/** - * Copyright 2016 Nikita Koksharov - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.redisson; - -import java.util.Arrays; -import java.util.Deque; -import java.util.LinkedList; -import java.util.concurrent.ConcurrentMap; -import java.util.concurrent.TimeUnit; - -import org.redisson.api.RFuture; -import org.redisson.client.codec.LongCodec; -import org.redisson.client.protocol.RedisCommands; -import org.redisson.command.CommandAsyncExecutor; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import io.netty.util.concurrent.Future; -import io.netty.util.concurrent.FutureListener; -import io.netty.util.internal.PlatformDependent; - -/** - * Eviction scheduler for RMapCache object. - * Deletes expired entries in time interval between 5 seconds to 2 hours. - * It analyzes deleted amount of expired keys - * and 'tune' next execution delay depending on it. - * - * @author Nikita Koksharov - * - */ -public class EvictionScheduler { - - private static final Logger log = LoggerFactory.getLogger(EvictionScheduler.class); - - public class RedissonCacheTask implements Runnable { - - final String name; - final String timeoutSetName; - final String maxIdleSetName; - final boolean multimap; - final Deque sizeHistory = new LinkedList(); - int delay = 10; - - final int minDelay = 1; - final int maxDelay = 2*60*60; - final int keysLimit = 300; - - public RedissonCacheTask(String name, String timeoutSetName, String maxIdleSetName, boolean multimap) { - this.name = name; - this.timeoutSetName = timeoutSetName; - this.maxIdleSetName = maxIdleSetName; - this.multimap = multimap; - } - - public void schedule() { - executor.getConnectionManager().getGroup().schedule(this, delay, TimeUnit.SECONDS); - } - - @Override - public void run() { - RFuture future = cleanupExpiredEntires(name, timeoutSetName, maxIdleSetName, keysLimit, multimap); - - future.addListener(new FutureListener() { - @Override - public void operationComplete(Future future) throws Exception { - if (!future.isSuccess()) { - schedule(); - return; - } - - Integer size = future.getNow(); - - if (sizeHistory.size() == 2) { - if (sizeHistory.peekFirst() > sizeHistory.peekLast() - && sizeHistory.peekLast() > size) { - delay = Math.min(maxDelay, (int)(delay*1.5)); - } - -// if (sizeHistory.peekFirst() < sizeHistory.peekLast() -// && sizeHistory.peekLast() < size) { -// prevDelay = Math.max(minDelay, prevDelay/2); -// } - - if (sizeHistory.peekFirst().intValue() == sizeHistory.peekLast() - && sizeHistory.peekLast().intValue() == size) { - if (size == keysLimit) { - delay = Math.max(minDelay, delay/4); - } - if (size == 0) { - delay = Math.min(maxDelay, (int)(delay*1.5)); - } - } - - sizeHistory.pollFirst(); - } - - sizeHistory.add(size); - schedule(); - } - }); - } - - } - - private final ConcurrentMap tasks = PlatformDependent.newConcurrentHashMap(); - private final CommandAsyncExecutor executor; - - private final ConcurrentMap lastExpiredTime = PlatformDependent.newConcurrentHashMap(); - private final int expireTaskExecutionDelay = 1000; - private final int valuesAmountToClean = 500; - - public EvictionScheduler(CommandAsyncExecutor executor) { - this.executor = executor; - } - - public void scheduleCleanMultimap(String name, String timeoutSetName) { - RedissonCacheTask task = new RedissonCacheTask(name, timeoutSetName, null, true); - RedissonCacheTask prevTask = tasks.putIfAbsent(name, task); - if (prevTask == null) { - task.schedule(); - } - } - - public void schedule(String name, String timeoutSetName) { - RedissonCacheTask task = new RedissonCacheTask(name, timeoutSetName, null, false); - RedissonCacheTask 
prevTask = tasks.putIfAbsent(name, task); - if (prevTask == null) { - task.schedule(); - } - } - - public void schedule(String name) { - schedule(name, null); - } - - public void schedule(String name, String timeoutSetName, String maxIdleSetName) { - RedissonCacheTask task = new RedissonCacheTask(name, timeoutSetName, maxIdleSetName, false); - RedissonCacheTask prevTask = tasks.putIfAbsent(name, task); - if (prevTask == null) { - task.schedule(); - } - } - - public void runCleanTask(final String name, String timeoutSetName, long currentDate) { - - final Long lastExpired = lastExpiredTime.get(name); - long now = System.currentTimeMillis(); - if (lastExpired == null) { - if (lastExpiredTime.putIfAbsent(name, now) != null) { - return; - } - } else if (lastExpired + expireTaskExecutionDelay >= now) { - if (!lastExpiredTime.replace(name, lastExpired, now)) { - return; - } - } else { - return; - } - - RFuture future = cleanupExpiredEntires(name, timeoutSetName, null, valuesAmountToClean, false); - - future.addListener(new FutureListener() { - @Override - public void operationComplete(Future future) throws Exception { - executor.getConnectionManager().getGroup().schedule(new Runnable() { - @Override - public void run() { - lastExpiredTime.remove(name, lastExpired); - } - }, expireTaskExecutionDelay*3, TimeUnit.SECONDS); - - if (!future.isSuccess()) { - log.warn("Can't execute clean task for expired values. RSetCache name: " + name, future.cause()); - return; - } - } - }); - } - - private RFuture cleanupExpiredEntires(String name, String timeoutSetName, String maxIdleSetName, int keysLimit, boolean multimap) { - if (multimap) { - return executor.evalWriteAsync(name, LongCodec.INSTANCE, RedisCommands.EVAL_INTEGER, - "local expiredKeys = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " - + "if #expiredKeys > 0 then " - + "redis.call('zrem', KEYS[2], unpack(expiredKeys)); " - - + "local values = redis.call('hmget', KEYS[1], unpack(expiredKeys)); " - + "local keys = {}; " - + "for i, v in ipairs(values) do " - + "local name = '{' .. KEYS[1] .. '}:' .. 
v; " - + "table.insert(keys, name); " - + "end; " - + "redis.call('del', unpack(keys)); " - - + "redis.call('hdel', KEYS[1], unpack(expiredKeys)); " - + "end; " - + "return #expiredKeys;", - Arrays.asList(name, timeoutSetName), System.currentTimeMillis(), keysLimit); - } - - if (maxIdleSetName != null) { - return executor.evalWriteAsync(name, LongCodec.INSTANCE, RedisCommands.EVAL_INTEGER, - "local expiredKeys1 = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " - + "if #expiredKeys1 > 0 then " - + "redis.call('zrem', KEYS[3], unpack(expiredKeys1)); " - + "redis.call('zrem', KEYS[2], unpack(expiredKeys1)); " - + "redis.call('hdel', KEYS[1], unpack(expiredKeys1)); " - + "end; " - + "local expiredKeys2 = redis.call('zrangebyscore', KEYS[3], 0, ARGV[1], 'limit', 0, ARGV[2]); " - + "if #expiredKeys2 > 0 then " - + "redis.call('zrem', KEYS[3], unpack(expiredKeys2)); " - + "redis.call('zrem', KEYS[2], unpack(expiredKeys2)); " - + "redis.call('hdel', KEYS[1], unpack(expiredKeys2)); " - + "end; " - + "return #expiredKeys1 + #expiredKeys2;", - Arrays.asList(name, timeoutSetName, maxIdleSetName), System.currentTimeMillis(), keysLimit); - } - - if (timeoutSetName == null) { - return executor.writeAsync(name, LongCodec.INSTANCE, RedisCommands.ZREMRANGEBYSCORE, name, 0, System.currentTimeMillis()); - } - - return executor.evalWriteAsync(name, LongCodec.INSTANCE, RedisCommands.EVAL_INTEGER, - "local expiredKeys = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " - + "if #expiredKeys > 0 then " - + "redis.call('zrem', KEYS[2], unpack(expiredKeys)); " - + "redis.call('hdel', KEYS[1], unpack(expiredKeys)); " - + "end; " - + "return #expiredKeys;", - Arrays.asList(name, timeoutSetName), System.currentTimeMillis(), keysLimit); - } - -} diff --git a/redisson/src/main/java/org/redisson/PubSubMessageListener.java b/redisson/src/main/java/org/redisson/PubSubMessageListener.java index 412be84bf..9b1b0ff61 100644 --- a/redisson/src/main/java/org/redisson/PubSubMessageListener.java +++ b/redisson/src/main/java/org/redisson/PubSubMessageListener.java @@ -64,6 +64,10 @@ public class PubSubMessageListener implements RedisPubSubListener { return false; return true; } + + public MessageListener getListener() { + return listener; + } @Override public void onMessage(String channel, Object message) { diff --git a/redisson/src/main/java/org/redisson/PubSubPatternMessageListener.java b/redisson/src/main/java/org/redisson/PubSubPatternMessageListener.java index 56997aaed..1ce1372b9 100644 --- a/redisson/src/main/java/org/redisson/PubSubPatternMessageListener.java +++ b/redisson/src/main/java/org/redisson/PubSubPatternMessageListener.java @@ -65,6 +65,10 @@ public class PubSubPatternMessageListener implements RedisPubSubListener { return true; } + public PatternMessageListener getListener() { + return listener; + } + @Override public void onMessage(String channel, V message) { } diff --git a/redisson/src/main/java/org/redisson/QueueTransferService.java b/redisson/src/main/java/org/redisson/QueueTransferService.java new file mode 100644 index 000000000..c9a21f685 --- /dev/null +++ b/redisson/src/main/java/org/redisson/QueueTransferService.java @@ -0,0 +1,52 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson; + +import java.util.concurrent.ConcurrentMap; + +import io.netty.util.internal.PlatformDependent; + +/** + * + * @author Nikita Koksharov + * + */ +public class QueueTransferService { + + private final ConcurrentMap tasks = PlatformDependent.newConcurrentHashMap(); + + public synchronized void schedule(String name, QueueTransferTask task) { + QueueTransferTask oldTask = tasks.putIfAbsent(name, task); + if (oldTask == null) { + task.start(); + } else { + oldTask.incUsage(); + } + } + + public synchronized void remove(String name) { + QueueTransferTask task = tasks.get(name); + if (task != null) { + if (task.decUsage() == 0) { + tasks.remove(name, task); + task.stop(); + } + } + } + + + +} diff --git a/redisson/src/main/java/org/redisson/QueueTransferTask.java b/redisson/src/main/java/org/redisson/QueueTransferTask.java new file mode 100644 index 000000000..b09372193 --- /dev/null +++ b/redisson/src/main/java/org/redisson/QueueTransferTask.java @@ -0,0 +1,142 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson; + +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; + +import org.redisson.api.RFuture; +import org.redisson.api.RTopic; +import org.redisson.api.listener.BaseStatusListener; +import org.redisson.api.listener.MessageListener; +import org.redisson.connection.ConnectionManager; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import io.netty.util.Timeout; +import io.netty.util.TimerTask; +import io.netty.util.concurrent.FutureListener; + +/** + * + * @author Nikita Koksharov + * + */ +public abstract class QueueTransferTask { + + private static final Logger log = LoggerFactory.getLogger(QueueTransferTask.class); + + private int usage = 1; + private final AtomicReference timeoutReference = new AtomicReference(); + private final ConnectionManager connectionManager; + + public QueueTransferTask(ConnectionManager connectionManager) { + super(); + this.connectionManager = connectionManager; + } + + public void incUsage() { + usage++; + } + + public int decUsage() { + usage--; + return usage; + } + + private int messageListenerId; + private int statusListenerId; + + public void start() { + RTopic schedulerTopic = getTopic(); + statusListenerId = schedulerTopic.addListener(new BaseStatusListener() { + @Override + public void onSubscribe(String channel) { + pushTask(); + } + }); + + messageListenerId = schedulerTopic.addListener(new MessageListener() { + @Override + public void onMessage(String channel, Long startTime) { + scheduleTask(startTime); + } + }); + } + + public void stop() { + RTopic schedulerTopic = getTopic(); + schedulerTopic.removeListener(messageListenerId); + schedulerTopic.removeListener(statusListenerId); + } + + private void scheduleTask(final Long startTime) { + if (startTime == null) { + return; + } + + Timeout oldTimeout = timeoutReference.get(); + if (oldTimeout != null) { + oldTimeout.cancel(); + timeoutReference.compareAndSet(oldTimeout, null); + } + + long delay = startTime - System.currentTimeMillis(); + if (delay > 10) { + Timeout timeout = connectionManager.newTimeout(new TimerTask() { + @Override + public void run(Timeout timeout) throws Exception { + pushTask(); + } + }, delay, TimeUnit.MILLISECONDS); + timeoutReference.set(timeout); + } else { + pushTask(); + } + } + + protected abstract RTopic getTopic(); + + protected abstract RFuture pushTaskAsync(); + + private void pushTask() { + RFuture startTimeFuture = pushTaskAsync(); + addListener(startTimeFuture); + } + + private void addListener(RFuture startTimeFuture) { + startTimeFuture.addListener(new FutureListener() { + @Override + public void operationComplete(io.netty.util.concurrent.Future future) throws Exception { + if (!future.isSuccess()) { + if (future.cause() instanceof RedissonShutdownException) { + return; + } + log.error(future.cause().getMessage(), future.cause()); + scheduleTask(System.currentTimeMillis() + 5 * 1000L); + return; + } + + if (future.getNow() != null) { + scheduleTask(future.getNow()); + } + } + }); + } + + +} diff --git a/redisson/src/main/java/org/redisson/RedisNodes.java b/redisson/src/main/java/org/redisson/RedisNodes.java index c953b66c6..0be49f734 100644 --- a/redisson/src/main/java/org/redisson/RedisNodes.java +++ b/redisson/src/main/java/org/redisson/RedisNodes.java @@ -15,6 +15,7 @@ */ package org.redisson; +import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Collection; import java.util.List; @@ -33,10 +34,17 @@ import 
org.redisson.connection.ConnectionListener; import org.redisson.connection.ConnectionManager; import org.redisson.connection.RedisClientEntry; import org.redisson.misc.RPromise; +import org.redisson.misc.URLBuilder; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; +/** + * + * @author Nikita Koksharov + * + * @param node type + */ public class RedisNodes implements NodesGroup { final ConnectionManager connectionManager; @@ -45,6 +53,18 @@ public class RedisNodes implements NodesGroup { this.connectionManager = connectionManager; } + @Override + public N getNode(String address) { + Collection clients = (Collection) connectionManager.getClients(); + InetSocketAddress addr = URLBuilder.toAddress(address); + for (N node : clients) { + if (node.getAddr().equals(addr)) { + return node; + } + } + return null; + } + @Override public Collection getNodes(NodeType type) { Collection clients = (Collection) connectionManager.getClients(); diff --git a/redisson/src/main/java/org/redisson/Redisson.java b/redisson/src/main/java/org/redisson/Redisson.java index 780f728f3..7594e69e7 100755 --- a/redisson/src/main/java/org/redisson/Redisson.java +++ b/redisson/src/main/java/org/redisson/Redisson.java @@ -26,14 +26,17 @@ import org.redisson.api.NodesGroup; import org.redisson.api.RAtomicDouble; import org.redisson.api.RAtomicLong; import org.redisson.api.RBatch; +import org.redisson.api.RBinaryStream; import org.redisson.api.RBitSet; import org.redisson.api.RBlockingDeque; +import org.redisson.api.RBlockingFairQueue; import org.redisson.api.RBlockingQueue; import org.redisson.api.RBloomFilter; import org.redisson.api.RBoundedBlockingQueue; import org.redisson.api.RBucket; import org.redisson.api.RBuckets; import org.redisson.api.RCountDownLatch; +import org.redisson.api.RDelayedQueue; import org.redisson.api.RDeque; import org.redisson.api.RGeo; import org.redisson.api.RHyperLogLog; @@ -48,6 +51,9 @@ import org.redisson.api.RLock; import org.redisson.api.RMap; import org.redisson.api.RMapCache; import org.redisson.api.RPatternTopic; +import org.redisson.api.RPermitExpirableSemaphore; +import org.redisson.api.RPriorityDeque; +import org.redisson.api.RPriorityQueue; import org.redisson.api.RQueue; import org.redisson.api.RReadWriteLock; import org.redisson.api.RRemoteService; @@ -55,7 +61,6 @@ import org.redisson.api.RScheduledExecutorService; import org.redisson.api.RScoredSortedSet; import org.redisson.api.RScript; import org.redisson.api.RSemaphore; -import org.redisson.api.RPermitExpirableSemaphore; import org.redisson.api.RSet; import org.redisson.api.RSetCache; import org.redisson.api.RSetMultimap; @@ -65,17 +70,17 @@ import org.redisson.api.RTopic; import org.redisson.api.RedissonClient; import org.redisson.api.RedissonReactiveClient; import org.redisson.client.codec.Codec; +import org.redisson.codec.CodecProvider; import org.redisson.command.CommandExecutor; -import org.redisson.command.CommandSyncService; import org.redisson.config.Config; import org.redisson.config.ConfigSupport; import org.redisson.connection.ConnectionManager; -import org.redisson.codec.CodecProvider; +import org.redisson.eviction.EvictionScheduler; import org.redisson.liveobject.provider.ResolverProvider; +import org.redisson.misc.RedissonObjectFactory; import org.redisson.pubsub.SemaphorePubSub; import io.netty.util.internal.PlatformDependent; -import org.redisson.misc.RedissonObjectFactory; /** * Main infrastructure class allows to get access @@ -91,8 +96,8 @@ public class Redisson 
implements RedissonClient { RedissonReference.warmUp(); } + protected final QueueTransferService queueTransferService = new QueueTransferService(); protected final EvictionScheduler evictionScheduler; - protected final CommandExecutor commandExecutor; protected final ConnectionManager connectionManager; protected final ConcurrentMap, Class> liveObjectClassCache = PlatformDependent.newConcurrentHashMap(); @@ -108,13 +113,20 @@ public class Redisson implements RedissonClient { Config configCopy = new Config(config); connectionManager = ConfigSupport.createConnectionManager(configCopy); - commandExecutor = new CommandSyncService(connectionManager); - evictionScheduler = new EvictionScheduler(commandExecutor); + evictionScheduler = new EvictionScheduler(connectionManager.getCommandExecutor()); codecProvider = config.getCodecProvider(); resolverProvider = config.getResolverProvider(); } - ConnectionManager getConnectionManager() { + public EvictionScheduler getEvictionScheduler() { + return evictionScheduler; + } + + public CommandExecutor getCommandExecutor() { + return connectionManager.getCommandExecutor(); + } + + public ConnectionManager getConnectionManager() { return connectionManager; } @@ -174,339 +186,362 @@ public class Redisson implements RedissonClient { return react; } + @Override + public RBinaryStream getBinaryStream(String name) { + return new RedissonBinaryStream(connectionManager.getCommandExecutor(), name); + } + @Override public RGeo getGeo(String name) { - return new RedissonGeo(commandExecutor, name); + return new RedissonGeo(connectionManager.getCommandExecutor(), name); } @Override public RGeo getGeo(String name, Codec codec) { - return new RedissonGeo(codec, commandExecutor, name); + return new RedissonGeo(codec, connectionManager.getCommandExecutor(), name); } @Override public RBucket getBucket(String name) { - return new RedissonBucket(commandExecutor, name); + return new RedissonBucket(connectionManager.getCommandExecutor(), name); } @Override public RBucket getBucket(String name, Codec codec) { - return new RedissonBucket(codec, commandExecutor, name); + return new RedissonBucket(codec, connectionManager.getCommandExecutor(), name); } @Override public RBuckets getBuckets() { - return new RedissonBuckets(this, commandExecutor); + return new RedissonBuckets(this, connectionManager.getCommandExecutor()); } @Override public RBuckets getBuckets(Codec codec) { - return new RedissonBuckets(this, codec, commandExecutor); + return new RedissonBuckets(this, codec, connectionManager.getCommandExecutor()); } @Override public RHyperLogLog getHyperLogLog(String name) { - return new RedissonHyperLogLog(commandExecutor, name); + return new RedissonHyperLogLog(connectionManager.getCommandExecutor(), name); } @Override public RHyperLogLog getHyperLogLog(String name, Codec codec) { - return new RedissonHyperLogLog(codec, commandExecutor, name); + return new RedissonHyperLogLog(codec, connectionManager.getCommandExecutor(), name); } @Override public RList getList(String name) { - return new RedissonList(commandExecutor, name); + return new RedissonList(connectionManager.getCommandExecutor(), name); } @Override public RList getList(String name, Codec codec) { - return new RedissonList(codec, commandExecutor, name); + return new RedissonList(codec, connectionManager.getCommandExecutor(), name); } @Override public RListMultimap getListMultimap(String name) { - return new RedissonListMultimap(commandExecutor, name); + return new RedissonListMultimap(id, 
connectionManager.getCommandExecutor(), name); } @Override public RListMultimap getListMultimap(String name, Codec codec) { - return new RedissonListMultimap(codec, commandExecutor, name); + return new RedissonListMultimap(id, codec, connectionManager.getCommandExecutor(), name); } @Override public RLocalCachedMap getLocalCachedMap(String name, LocalCachedMapOptions options) { - return new RedissonLocalCachedMap(this, commandExecutor, name, options); + return new RedissonLocalCachedMap(id, connectionManager.getCommandExecutor(), name, options); } @Override public RLocalCachedMap getLocalCachedMap(String name, Codec codec, LocalCachedMapOptions options) { - return new RedissonLocalCachedMap(this, codec, commandExecutor, name, options); + return new RedissonLocalCachedMap(id, codec, connectionManager.getCommandExecutor(), name, options); } @Override public RMap getMap(String name) { - return new RedissonMap(commandExecutor, name); + return new RedissonMap(id, connectionManager.getCommandExecutor(), name); } @Override public RSetMultimap getSetMultimap(String name) { - return new RedissonSetMultimap(commandExecutor, name); + return new RedissonSetMultimap(id, connectionManager.getCommandExecutor(), name); } @Override public RSetMultimapCache getSetMultimapCache(String name) { - return new RedissonSetMultimapCache(evictionScheduler, commandExecutor, name); + return new RedissonSetMultimapCache(id, evictionScheduler, connectionManager.getCommandExecutor(), name); } @Override public RSetMultimapCache getSetMultimapCache(String name, Codec codec) { - return new RedissonSetMultimapCache(evictionScheduler, codec, commandExecutor, name); + return new RedissonSetMultimapCache(id, evictionScheduler, codec, connectionManager.getCommandExecutor(), name); } @Override public RListMultimapCache getListMultimapCache(String name) { - return new RedissonListMultimapCache(evictionScheduler, commandExecutor, name); + return new RedissonListMultimapCache(id, evictionScheduler, connectionManager.getCommandExecutor(), name); } @Override public RListMultimapCache getListMultimapCache(String name, Codec codec) { - return new RedissonListMultimapCache(evictionScheduler, codec, commandExecutor, name); + return new RedissonListMultimapCache(id, evictionScheduler, codec, connectionManager.getCommandExecutor(), name); } @Override public RSetMultimap getSetMultimap(String name, Codec codec) { - return new RedissonSetMultimap(codec, commandExecutor, name); + return new RedissonSetMultimap(id, codec, connectionManager.getCommandExecutor(), name); } @Override public RSetCache getSetCache(String name) { - return new RedissonSetCache(evictionScheduler, commandExecutor, name); + return new RedissonSetCache(evictionScheduler, connectionManager.getCommandExecutor(), name); } @Override public RSetCache getSetCache(String name, Codec codec) { - return new RedissonSetCache(codec, evictionScheduler, commandExecutor, name); + return new RedissonSetCache(codec, evictionScheduler, connectionManager.getCommandExecutor(), name); } @Override public RMapCache getMapCache(String name) { - return new RedissonMapCache(evictionScheduler, commandExecutor, name); + return new RedissonMapCache(id, evictionScheduler, connectionManager.getCommandExecutor(), name); } @Override public RMapCache getMapCache(String name, Codec codec) { - return new RedissonMapCache(codec, evictionScheduler, commandExecutor, name); + return new RedissonMapCache(id, codec, evictionScheduler, connectionManager.getCommandExecutor(), name); } @Override public RMap 
getMap(String name, Codec codec) { - return new RedissonMap(codec, commandExecutor, name); + return new RedissonMap(id, codec, connectionManager.getCommandExecutor(), name); } @Override public RLock getLock(String name) { - return new RedissonLock(commandExecutor, name, id); + return new RedissonLock(connectionManager.getCommandExecutor(), name, id); } @Override public RLock getFairLock(String name) { - return new RedissonFairLock(commandExecutor, name, id); + return new RedissonFairLock(connectionManager.getCommandExecutor(), name, id); } @Override public RReadWriteLock getReadWriteLock(String name) { - return new RedissonReadWriteLock(commandExecutor, name, id); + return new RedissonReadWriteLock(connectionManager.getCommandExecutor(), name, id); } @Override public RSet getSet(String name) { - return new RedissonSet(commandExecutor, name); + return new RedissonSet(connectionManager.getCommandExecutor(), name); } @Override public RSet getSet(String name, Codec codec) { - return new RedissonSet(codec, commandExecutor, name); + return new RedissonSet(codec, connectionManager.getCommandExecutor(), name); } @Override public RScript getScript() { - return new RedissonScript(commandExecutor); + return new RedissonScript(connectionManager.getCommandExecutor()); } @Override public RScheduledExecutorService getExecutorService(String name) { - return new RedissonExecutorService(connectionManager.getCodec(), commandExecutor, this, name); + return new RedissonExecutorService(connectionManager.getCodec(), connectionManager.getCommandExecutor(), this, name); } @Override public RScheduledExecutorService getExecutorService(Codec codec, String name) { - return new RedissonExecutorService(codec, commandExecutor, this, name); + return new RedissonExecutorService(codec, connectionManager.getCommandExecutor(), this, name); } @Override public RRemoteService getRemoteService() { - return new RedissonRemoteService(this, commandExecutor); + return new RedissonRemoteService(this, connectionManager.getCommandExecutor()); } @Override public RRemoteService getRemoteService(String name) { - return new RedissonRemoteService(this, name, commandExecutor); + return new RedissonRemoteService(this, name, connectionManager.getCommandExecutor()); } @Override public RRemoteService getRemoteService(Codec codec) { - return new RedissonRemoteService(codec, this, commandExecutor); + return new RedissonRemoteService(codec, this, connectionManager.getCommandExecutor()); } @Override public RRemoteService getRemoteService(String name, Codec codec) { - return new RedissonRemoteService(codec, this, name, commandExecutor); + return new RedissonRemoteService(codec, this, name, connectionManager.getCommandExecutor()); } @Override public RSortedSet getSortedSet(String name) { - return new RedissonSortedSet(commandExecutor, name, this); + return new RedissonSortedSet(connectionManager.getCommandExecutor(), name, this); } @Override public RSortedSet getSortedSet(String name, Codec codec) { - return new RedissonSortedSet(codec, commandExecutor, name, this); + return new RedissonSortedSet(codec, connectionManager.getCommandExecutor(), name, this); } @Override public RScoredSortedSet getScoredSortedSet(String name) { - return new RedissonScoredSortedSet(commandExecutor, name); + return new RedissonScoredSortedSet(connectionManager.getCommandExecutor(), name); } @Override public RScoredSortedSet getScoredSortedSet(String name, Codec codec) { - return new RedissonScoredSortedSet(codec, commandExecutor, name); + return new 
RedissonScoredSortedSet(codec, connectionManager.getCommandExecutor(), name); } @Override public RLexSortedSet getLexSortedSet(String name) { - return new RedissonLexSortedSet(commandExecutor, name); + return new RedissonLexSortedSet(connectionManager.getCommandExecutor(), name); } @Override public RTopic getTopic(String name) { - return new RedissonTopic(commandExecutor, name); + return new RedissonTopic(connectionManager.getCommandExecutor(), name); } @Override public RTopic getTopic(String name, Codec codec) { - return new RedissonTopic(codec, commandExecutor, name); + return new RedissonTopic(codec, connectionManager.getCommandExecutor(), name); } @Override public RPatternTopic getPatternTopic(String pattern) { - return new RedissonPatternTopic(commandExecutor, pattern); + return new RedissonPatternTopic(connectionManager.getCommandExecutor(), pattern); } @Override public RPatternTopic getPatternTopic(String pattern, Codec codec) { - return new RedissonPatternTopic(codec, commandExecutor, pattern); + return new RedissonPatternTopic(codec, connectionManager.getCommandExecutor(), pattern); } + @Override + public RBlockingFairQueue getBlockingFairQueue(String name) { + return new RedissonBlockingFairQueue(connectionManager.getCommandExecutor(), name, semaphorePubSub, id); + } + + @Override + public RBlockingFairQueue getBlockingFairQueue(String name, Codec codec) { + return new RedissonBlockingFairQueue(codec, connectionManager.getCommandExecutor(), name, semaphorePubSub, id); + } + + @Override + public RDelayedQueue getDelayedQueue(RQueue destinationQueue) { + if (destinationQueue == null) { + throw new NullPointerException(); + } + return new RedissonDelayedQueue(queueTransferService, destinationQueue.getCodec(), connectionManager.getCommandExecutor(), destinationQueue.getName()); + } + @Override public RQueue getQueue(String name) { - return new RedissonQueue(commandExecutor, name); + return new RedissonQueue(connectionManager.getCommandExecutor(), name); } @Override public RQueue getQueue(String name, Codec codec) { - return new RedissonQueue(codec, commandExecutor, name); + return new RedissonQueue(codec, connectionManager.getCommandExecutor(), name); } @Override public RBlockingQueue getBlockingQueue(String name) { - return new RedissonBlockingQueue(commandExecutor, name); + return new RedissonBlockingQueue(connectionManager.getCommandExecutor(), name); } @Override public RBlockingQueue getBlockingQueue(String name, Codec codec) { - return new RedissonBlockingQueue(codec, commandExecutor, name); + return new RedissonBlockingQueue(codec, connectionManager.getCommandExecutor(), name); } @Override public RBoundedBlockingQueue getBoundedBlockingQueue(String name) { - return new RedissonBoundedBlockingQueue(semaphorePubSub, commandExecutor, name); + return new RedissonBoundedBlockingQueue(semaphorePubSub, connectionManager.getCommandExecutor(), name); } @Override public RBoundedBlockingQueue getBoundedBlockingQueue(String name, Codec codec) { - return new RedissonBoundedBlockingQueue(semaphorePubSub, codec, commandExecutor, name); + return new RedissonBoundedBlockingQueue(semaphorePubSub, codec, connectionManager.getCommandExecutor(), name); } @Override public RDeque getDeque(String name) { - return new RedissonDeque(commandExecutor, name); + return new RedissonDeque(connectionManager.getCommandExecutor(), name); } @Override public RDeque getDeque(String name, Codec codec) { - return new RedissonDeque(codec, commandExecutor, name); + return new RedissonDeque(codec, 
connectionManager.getCommandExecutor(), name); } @Override public RBlockingDeque getBlockingDeque(String name) { - return new RedissonBlockingDeque(commandExecutor, name); + return new RedissonBlockingDeque(connectionManager.getCommandExecutor(), name); } @Override public RBlockingDeque getBlockingDeque(String name, Codec codec) { - return new RedissonBlockingDeque(codec, commandExecutor, name); + return new RedissonBlockingDeque(codec, connectionManager.getCommandExecutor(), name); }; @Override public RAtomicLong getAtomicLong(String name) { - return new RedissonAtomicLong(commandExecutor, name); + return new RedissonAtomicLong(connectionManager.getCommandExecutor(), name); } @Override public RAtomicDouble getAtomicDouble(String name) { - return new RedissonAtomicDouble(commandExecutor, name); + return new RedissonAtomicDouble(connectionManager.getCommandExecutor(), name); } @Override public RCountDownLatch getCountDownLatch(String name) { - return new RedissonCountDownLatch(commandExecutor, name, id); + return new RedissonCountDownLatch(connectionManager.getCommandExecutor(), name, id); } @Override public RBitSet getBitSet(String name) { - return new RedissonBitSet(commandExecutor, name); + return new RedissonBitSet(connectionManager.getCommandExecutor(), name); } @Override public RSemaphore getSemaphore(String name) { - return new RedissonSemaphore(commandExecutor, name, semaphorePubSub); + return new RedissonSemaphore(connectionManager.getCommandExecutor(), name, semaphorePubSub); } public RPermitExpirableSemaphore getPermitExpirableSemaphore(String name) { - return new RedissonPermitExpirableSemaphore(commandExecutor, name, semaphorePubSub); + return new RedissonPermitExpirableSemaphore(connectionManager.getCommandExecutor(), name, semaphorePubSub); } @Override public RBloomFilter getBloomFilter(String name) { - return new RedissonBloomFilter(commandExecutor, name); + return new RedissonBloomFilter(connectionManager.getCommandExecutor(), name); } @Override public RBloomFilter getBloomFilter(String name, Codec codec) { - return new RedissonBloomFilter(codec, commandExecutor, name); + return new RedissonBloomFilter(codec, connectionManager.getCommandExecutor(), name); } @Override public RKeys getKeys() { - return new RedissonKeys(commandExecutor); + return new RedissonKeys(connectionManager.getCommandExecutor()); } @Override public RBatch createBatch() { - RedissonBatch batch = new RedissonBatch(evictionScheduler, connectionManager); + RedissonBatch batch = new RedissonBatch(id, evictionScheduler, connectionManager); if (config.isRedissonReferenceEnabled()) { batch.enableRedissonReferenceSupport(this); } @@ -568,8 +603,29 @@ public class Redisson implements RedissonClient { } protected void enableRedissonReferenceSupport() { - this.commandExecutor.enableRedissonReferenceSupport(this); + this.connectionManager.getCommandExecutor().enableRedissonReferenceSupport(this); + } + + @Override + public RPriorityQueue getPriorityQueue(String name) { + return new RedissonPriorityQueue(connectionManager.getCommandExecutor(), name, this); + } + + @Override + public RPriorityQueue getPriorityQueue(String name, Codec codec) { + return new RedissonPriorityQueue(codec, connectionManager.getCommandExecutor(), name, this); + } + + @Override + public RPriorityDeque getPriorityDeque(String name) { + return new RedissonPriorityDeque(connectionManager.getCommandExecutor(), name, this); } + @Override + public RPriorityDeque getPriorityDeque(String name, Codec codec) { + return new RedissonPriorityDeque(codec, 
connectionManager.getCommandExecutor(), name, this); + } + + } diff --git a/redisson/src/main/java/org/redisson/RedissonBaseIterator.java b/redisson/src/main/java/org/redisson/RedissonBaseIterator.java index 0f85a46d9..fc9151f8d 100644 --- a/redisson/src/main/java/org/redisson/RedissonBaseIterator.java +++ b/redisson/src/main/java/org/redisson/RedissonBaseIterator.java @@ -22,12 +22,15 @@ import java.util.List; import java.util.NoSuchElementException; import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; + +import io.netty.buffer.ByteBuf; abstract class RedissonBaseIterator implements Iterator { - private List firstValues; - private List lastValues; - private Iterator lastIter; + private List firstValues; + private List lastValues; + private Iterator lastIter; protected long nextIterPos; protected InetSocketAddress client; @@ -40,6 +43,8 @@ abstract class RedissonBaseIterator implements Iterator { public boolean hasNext() { if (lastIter == null || !lastIter.hasNext()) { if (finished) { + free(firstValues); + free(lastValues); currentElementRemoved = false; removeExecuted = false; @@ -56,8 +61,12 @@ abstract class RedissonBaseIterator implements Iterator { long prevIterPos; do { prevIterPos = nextIterPos; - ListScanResult res = iterator(client, nextIterPos); - lastValues = new ArrayList(res.getValues()); + ListScanResult res = iterator(client, nextIterPos); + if (lastValues != null) { + free(lastValues); + } + + lastValues = convert(res.getValues()); client = res.getRedisClient(); if (nextIterPos == 0 && firstValues == null) { @@ -87,6 +96,9 @@ abstract class RedissonBaseIterator implements Iterator { } } } else if (lastValues.removeAll(firstValues)) { + free(firstValues); + free(lastValues); + currentElementRemoved = false; removeExecuted = false; client = null; @@ -111,11 +123,28 @@ abstract class RedissonBaseIterator implements Iterator { return lastIter.hasNext(); } + private List convert(List list) { + List result = new ArrayList(list.size()); + for (ScanObjectEntry entry : list) { + result.add(entry.getBuf()); + } + return result; + } + + private void free(List list) { + if (list == null) { + return; + } + for (ByteBuf byteBuf : list) { + byteBuf.release(); + } + } + protected boolean tryAgain() { return false; } - abstract ListScanResult iterator(InetSocketAddress client, long nextIterPos); + abstract ListScanResult iterator(InetSocketAddress client, long nextIterPos); @Override public V next() { @@ -123,7 +152,7 @@ abstract class RedissonBaseIterator implements Iterator { throw new NoSuchElementException("No such element"); } - value = lastIter.next(); + value = (V) lastIter.next().getObj(); currentElementRemoved = false; return value; } diff --git a/redisson/src/main/java/org/redisson/RedissonBaseMapIterator.java b/redisson/src/main/java/org/redisson/RedissonBaseMapIterator.java index c64db7ece..f2924e205 100644 --- a/redisson/src/main/java/org/redisson/RedissonBaseMapIterator.java +++ b/redisson/src/main/java/org/redisson/RedissonBaseMapIterator.java @@ -28,7 +28,7 @@ import org.redisson.client.protocol.decoder.ScanObjectEntry; import io.netty.buffer.ByteBuf; -abstract class RedissonBaseMapIterator implements Iterator { +public abstract class RedissonBaseMapIterator implements Iterator { private Map firstValues; private Map lastValues; @@ -151,7 +151,7 @@ abstract class RedissonBaseMapIterator implements Iterator { @Override public M next() { if (!hasNext()) { - throw new NoSuchElementException("No such element at 
index"); + throw new NoSuchElementException(); } entry = lastIter.next(); @@ -160,7 +160,7 @@ abstract class RedissonBaseMapIterator implements Iterator { } @SuppressWarnings("unchecked") - M getValue(final Entry entry) { + protected M getValue(final Entry entry) { return (M)new AbstractMap.SimpleEntry((K)entry.getKey().getObj(), (V)entry.getValue().getObj()) { @Override @@ -176,7 +176,7 @@ abstract class RedissonBaseMapIterator implements Iterator { if (currentElementRemoved) { throw new IllegalStateException("Element been already deleted"); } - if (lastIter == null) { + if (lastIter == null || entry == null) { throw new IllegalStateException(); } @@ -185,6 +185,7 @@ abstract class RedissonBaseMapIterator implements Iterator { removeKey(); currentElementRemoved = true; removeExecuted = true; + entry = null; } protected abstract void removeKey(); diff --git a/redisson/src/main/java/org/redisson/RedissonBatch.java b/redisson/src/main/java/org/redisson/RedissonBatch.java index fc7f9c774..a75a028ab 100644 --- a/redisson/src/main/java/org/redisson/RedissonBatch.java +++ b/redisson/src/main/java/org/redisson/RedissonBatch.java @@ -16,6 +16,7 @@ package org.redisson; import java.util.List; +import java.util.UUID; import org.redisson.api.RAtomicDoubleAsync; import org.redisson.api.RAtomicLongAsync; @@ -41,9 +42,11 @@ import org.redisson.api.RScriptAsync; import org.redisson.api.RSetAsync; import org.redisson.api.RSetCacheAsync; import org.redisson.api.RTopicAsync; +import org.redisson.api.RedissonClient; import org.redisson.client.codec.Codec; import org.redisson.command.CommandBatchService; import org.redisson.connection.ConnectionManager; +import org.redisson.eviction.EvictionScheduler; /** * @@ -55,10 +58,12 @@ public class RedissonBatch implements RBatch { private final EvictionScheduler evictionScheduler; private final CommandBatchService executorService; + private final UUID id; - protected RedissonBatch(EvictionScheduler evictionScheduler, ConnectionManager connectionManager) { + protected RedissonBatch(UUID id, EvictionScheduler evictionScheduler, ConnectionManager connectionManager) { this.executorService = new CommandBatchService(connectionManager); this.evictionScheduler = evictionScheduler; + this.id = id; } @Override @@ -93,12 +98,12 @@ public class RedissonBatch implements RBatch { @Override public RMapAsync getMap(String name) { - return new RedissonMap(executorService, name); + return new RedissonMap(id, executorService, name); } @Override public RMapAsync getMap(String name, Codec codec) { - return new RedissonMap(codec, executorService, name); + return new RedissonMap(id, codec, executorService, name); } @Override @@ -193,12 +198,12 @@ public class RedissonBatch implements RBatch { @Override public RMapCacheAsync getMapCache(String name, Codec codec) { - return new RedissonMapCache(codec, evictionScheduler, executorService, name); + return new RedissonMapCache(id, codec, evictionScheduler, executorService, name); } @Override public RMapCacheAsync getMapCache(String name) { - return new RedissonMapCache(evictionScheduler, executorService, name); + return new RedissonMapCache(id, evictionScheduler, executorService, name); } @Override @@ -243,22 +248,22 @@ public class RedissonBatch implements RBatch { @Override public RMultimapAsync getSetMultimap(String name) { - return new RedissonSetMultimap(executorService, name); + return new RedissonSetMultimap(id, executorService, name); } @Override public RMultimapAsync getSetMultimap(String name, Codec codec) { - return new 
RedissonSetMultimap(codec, executorService, name); + return new RedissonSetMultimap(id, codec, executorService, name); } @Override public RMultimapAsync getListMultimap(String name) { - return new RedissonListMultimap(executorService, name); + return new RedissonListMultimap(id, executorService, name); } @Override public RMultimapAsync getListMultimap(String name, Codec codec) { - return new RedissonListMultimap(codec, executorService, name); + return new RedissonListMultimap(id, codec, executorService, name); } @Override @@ -273,22 +278,22 @@ public class RedissonBatch implements RBatch { @Override public RMultimapCacheAsync getSetMultimapCache(String name) { - return new RedissonSetMultimapCache(evictionScheduler, executorService, name); + return new RedissonSetMultimapCache(id, evictionScheduler, executorService, name); } @Override public RMultimapCacheAsync getSetMultimapCache(String name, Codec codec) { - return new RedissonSetMultimapCache(evictionScheduler, codec, executorService, name); + return new RedissonSetMultimapCache(id, evictionScheduler, codec, executorService, name); } @Override public RMultimapCacheAsync getListMultimapCache(String name) { - return new RedissonListMultimapCache(evictionScheduler, executorService, name); + return new RedissonListMultimapCache(id, evictionScheduler, executorService, name); } @Override public RMultimapCacheAsync getListMultimapCache(String name, Codec codec) { - return new RedissonListMultimapCache(evictionScheduler, codec, executorService, name); + return new RedissonListMultimapCache(id, evictionScheduler, codec, executorService, name); } protected void enableRedissonReferenceSupport(Redisson redisson) { diff --git a/redisson/src/main/java/org/redisson/RedissonBinaryStream.java b/redisson/src/main/java/org/redisson/RedissonBinaryStream.java new file mode 100644 index 000000000..349a285db --- /dev/null +++ b/redisson/src/main/java/org/redisson/RedissonBinaryStream.java @@ -0,0 +1,303 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.Arrays; + +import org.redisson.api.RBinaryStream; +import org.redisson.api.RFuture; +import org.redisson.client.codec.ByteArrayCodec; +import org.redisson.client.handler.State; +import org.redisson.client.protocol.Decoder; +import org.redisson.client.protocol.RedisCommand; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.command.CommandAsyncExecutor; +import org.redisson.misc.RPromise; + +import io.netty.buffer.ByteBuf; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.FutureListener; + +/** + * + * @author Nikita Koksharov + * + */ +public class RedissonBinaryStream extends RedissonBucket implements RBinaryStream { + + class RedissonOutputStream extends OutputStream { + + @Override + public void write(int b) throws IOException { + writeBytes(new byte[] {(byte)b}); + } + + private void writeBytes(byte[] bytes) { + get(writeAsync(bytes)); + } + + @Override + public void write(byte[] b, int off, int len) throws IOException { + byte[] dest; + if (b.length == len && off == 0) { + dest = b; + } else { + dest = new byte[len]; + System.arraycopy(b, off, dest, 0, len); + } + writeBytes(dest); + } + + } + + class RedissonInputStream extends InputStream { + + private int index; + private int mark; + + @Override + public long skip(long n) throws IOException { + long k = size() - index; + if (n < k) { + k = n; + if (n < 0) { + k = 0; + } + } + + index += k; + return k; + } + + @Override + public void mark(int readlimit) { + mark = index; + } + + @Override + public void reset() throws IOException { + index = mark; + } + + @Override + public int available() throws IOException { + return (int)(size() - index); + } + + @Override + public boolean markSupported() { + return true; + } + + @Override + public int read() throws IOException { + byte[] b = new byte[1]; + int len = read(b); + if (len == -1) { + return -1; + } + return b[0] & 0xff; + } + + @Override + public int read(final byte[] b, final int off, final int len) throws IOException { + if (len == 0) { + return 0; + } + if (b == null) { + throw new NullPointerException(); + } + if (off < 0 || len < 0 || len > b.length - off) { + throw new IndexOutOfBoundsException(); + } + + return (Integer)get(commandExecutor.evalReadAsync(getName(), codec, new RedisCommand("EVAL", new Decoder() { + @Override + public Integer decode(ByteBuf buf, State state) { + if (buf.readableBytes() == 0) { + return -1; + } + int readBytes = Math.min(buf.readableBytes(), len); + buf.readBytes(b, off, readBytes); + index += readBytes; + return readBytes; + } + }), + "local parts = redis.call('get', KEYS[2]); " + + "if parts ~= false then " + + "local startPart = math.floor(tonumber(ARGV[1])/536870912); " + + "local endPart = math.floor(tonumber(ARGV[2])/536870912); " + + "local startPartName = KEYS[1]; " + + "local endPartName = KEYS[1]; " + + + "if startPart > 0 then " + + "startPartName = KEYS[1] .. ':' .. startPart; " + + "end; " + + "if endPart > 0 then " + + "endPartName = KEYS[1] .. ':' .. endPart; " + + "end; " + + + "if startPartName ~= endPartName then " + + "local startIndex = tonumber(ARGV[1]) - startPart*536870912; " + + "local endIndex = tonumber(ARGV[2]) - endPart*536870912; " + + "local result = redis.call('getrange', startPartName, startIndex, 536870911); " + + "result = result .. 
redis.call('getrange', endPartName, 0, endIndex-1); " + + "return result; " + + "end; " + + + "local startIndex = tonumber(ARGV[1]) - startPart*536870912; " + + "local endIndex = tonumber(ARGV[2]) - endPart*536870912; " + + "return redis.call('getrange', startPartName, startIndex, endIndex);" + + "end;" + + "return redis.call('getrange', KEYS[1], ARGV[1], ARGV[2]);", + Arrays.asList(getName(), getPartsName()), index, index + len - 1)); + } + + } + + protected RedissonBinaryStream(CommandAsyncExecutor connectionManager, String name) { + super(ByteArrayCodec.INSTANCE, connectionManager, name); + } + + @Override + public RFuture sizeAsync() { + return commandExecutor.evalReadAsync(getName(), codec, RedisCommands.EVAL_LONG, + "local parts = redis.call('get', KEYS[2]); " + + "local lastPartName = KEYS[1];" + + "if parts ~= false then " + + "lastPartName = KEYS[1] .. ':' .. (tonumber(parts)-1);" + + "local lastPartSize = redis.call('strlen', lastPartName);" + + "return ((tonumber(parts)-1) * 536870912) + lastPartSize;" + + "end;" + + "return redis.call('strlen', lastPartName);", + Arrays.asList(getName(), getPartsName())); + } + + private RFuture writeAsync(byte[] bytes) { + return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_VOID, + "local parts = redis.call('get', KEYS[2]); " + + "local lastPartName = KEYS[1];" + + "if parts ~= false then " + + "lastPartName = KEYS[1] .. ':' .. (tonumber(parts)-1);" + + "end;" + + "local lastPartSize = redis.call('strlen', lastPartName);" + + "if lastPartSize == 0 then " + + "redis.call('append', lastPartName, ARGV[1]); " + + "return; " + + "end;" + + + "local chunkSize = 536870912 - lastPartSize; " + + "local arraySize = string.len(ARGV[1]); " + + "if chunkSize > 0 then " + + "if chunkSize >= arraySize then " + + "redis.call('append', lastPartName, ARGV[1]); " + + "return; " + + "else " + + "local chunk = string.sub(ARGV[1], 1, chunkSize);" + + "redis.call('append', lastPartName, chunk); " + + + "if parts == false then " + + "parts = 1;" + + "redis.call('incrby', KEYS[2], 2); " + + "else " + + "redis.call('incrby', KEYS[2], 1); " + + "end; " + + + "local newPartName = KEYS[1] .. ':' .. parts; " + + "chunk = string.sub(ARGV[1], -(arraySize - chunkSize));" + + "redis.call('append', newPartName, chunk); " + + "end; " + + "else " + + "if parts == false then " + + "parts = 1;" + + "redis.call('incrby', KEYS[2], 2); " + + "else " + + "redis.call('incrby', KEYS[2], 1); " + + "end; " + + + "local newPartName = KEYS[1] .. ':' .. 
parts; " + + "local chunk = string.sub(ARGV[1], -(arraySize - chunkSize));" + + "redis.call('append', newPartName, ARGV[1]); " + + "end; ", + Arrays.asList(getName(), getPartsName()), bytes); + } + + @Override + public InputStream getInputStream() { + return new RedissonInputStream(); + } + + @Override + public OutputStream getOutputStream() { + return new RedissonOutputStream(); + } + + @Override + public RFuture setAsync(byte[] value) { + if (value.length > 512*1024*1024) { + RPromise result = newPromise(); + int chunkSize = 10*1024*1024; + write(value, result, chunkSize, 0); + return result; + } + + return super.setAsync(value); + } + + private void write(final byte[] value, final RPromise result, final int chunkSize, final int i) { + final int len = Math.min(value.length - i*chunkSize, chunkSize); + byte[] bytes = Arrays.copyOfRange(value, i*chunkSize, i*chunkSize + len); + writeAsync(bytes).addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + result.tryFailure(future.cause()); + return; + } + + int j = i + 1; + if (j*chunkSize > value.length) { + result.trySuccess(null); + } else { + write(value, result, chunkSize, j); + } + } + }); + } + + private String getPartsName() { + return getName() + ":parts"; + } + + @Override + public RFuture deleteAsync() { + return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_BOOLEAN_AMOUNT, + "local parts = redis.call('get', KEYS[2]); " + + "local names = {KEYS[1], KEYS[2]};" + + "if parts ~= false then " + + "for i = 1, tonumber(parts)-1, 1 do " + + "table.insert(names, KEYS[1] .. ':' .. i); " + + "end; " + + "end;" + + "return redis.call('del', unpack(names));", + Arrays.asList(getName(), getPartsName())); + + } + +} diff --git a/redisson/src/main/java/org/redisson/RedissonBlockingDeque.java b/redisson/src/main/java/org/redisson/RedissonBlockingDeque.java index 74e20e0f2..49a51da4f 100644 --- a/redisson/src/main/java/org/redisson/RedissonBlockingDeque.java +++ b/redisson/src/main/java/org/redisson/RedissonBlockingDeque.java @@ -231,7 +231,7 @@ public class RedissonBlockingDeque extends RedissonDeque implements RBlock for (Object name : queueNames) { params.add(name); } - params.add(unit.toSeconds(timeout)); + params.add(toSeconds(timeout, unit)); return commandExecutor.writeAsync(getName(), codec, RedisCommands.BRPOP_VALUE, params.toArray()); } @@ -243,7 +243,7 @@ public class RedissonBlockingDeque extends RedissonDeque implements RBlock @Override public RFuture pollLastAsync(long timeout, TimeUnit unit) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.BRPOP_VALUE, getName(), unit.toSeconds(timeout)); + return commandExecutor.writeAsync(getName(), codec, RedisCommands.BRPOP_VALUE, getName(), toSeconds(timeout, unit)); } @Override diff --git a/redisson/src/main/java/org/redisson/RedissonBlockingFairQueue.java b/redisson/src/main/java/org/redisson/RedissonBlockingFairQueue.java new file mode 100644 index 000000000..c82214cf8 --- /dev/null +++ b/redisson/src/main/java/org/redisson/RedissonBlockingFairQueue.java @@ -0,0 +1,781 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson; + +import java.util.Arrays; +import java.util.Collections; +import java.util.UUID; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; + +import org.redisson.api.RBlockingFairQueue; +import org.redisson.api.RFuture; +import org.redisson.client.codec.Codec; +import org.redisson.client.codec.LongCodec; +import org.redisson.client.codec.StringCodec; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.command.CommandExecutor; +import org.redisson.misc.RPromise; +import org.redisson.pubsub.SemaphorePubSub; + +import io.netty.util.Timeout; +import io.netty.util.TimerTask; +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.FutureListener; + +/** + * + * @author Nikita Koksharov + * + */ +public class RedissonBlockingFairQueue extends RedissonBlockingQueue implements RBlockingFairQueue { + + public static final long TIMEOUT_SECONDS = 30; + + private final UUID id; + private final AtomicInteger instances = new AtomicInteger(); + private final SemaphorePubSub semaphorePubSub; + + protected RedissonBlockingFairQueue(CommandExecutor commandExecutor, String name, SemaphorePubSub semaphorePubSub, UUID id) { + super(commandExecutor, name); + this.semaphorePubSub = semaphorePubSub; + this.id = id; + instances.incrementAndGet(); + } + + protected RedissonBlockingFairQueue(Codec codec, CommandExecutor commandExecutor, String name, SemaphorePubSub semaphorePubSub, UUID id) { + super(codec, commandExecutor, name); + this.semaphorePubSub = semaphorePubSub; + this.id = id; + instances.incrementAndGet(); + } + + private String getIdsListName() { + return suffixName(getName(), "list"); + } + + private String getTimeoutName() { + return suffixName(getName(), "timeout"); + } + + private String getChannelName() { + return suffixName(getName(), getCurrentId() + ":channel"); + } + + private RedissonLockEntry getEntry() { + return semaphorePubSub.getEntry(getName()); + } + + private RFuture subscribe() { + return semaphorePubSub.subscribe(getName(), getChannelName(), commandExecutor.getConnectionManager()); + } + + private void unsubscribe(RFuture future) { + semaphorePubSub.unsubscribe(future.getNow(), getName(), getChannelName(), commandExecutor.getConnectionManager()); + } + + @Override + public RFuture deleteAsync() { + return commandExecutor.writeAsync(getName(), RedisCommands.DEL_OBJECTS, getName(), getIdsListName(), getTimeoutName()); + } + + private Long tryAcquire() { + return get(tryAcquireAsync()); + } + + private RFuture tryAcquireAsync() { + long timeout = System.currentTimeMillis() + TIMEOUT_SECONDS*1000; + + return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_LONG, + + "local timeout = redis.call('get', KEYS[3]);" + + "if timeout ~= false and tonumber(timeout) <= tonumber(ARGV[3]) then " + + "redis.call('lpop', KEYS[2]); " + + "local nextValue = redis.call('lindex', KEYS[2], 0); " + + "if nextValue ~= false and nextValue ~= ARGV[1] then " + + 
"redis.call('set', KEYS[3], ARGV[2]);" + + "redis.call('publish', '{' .. KEYS[1] .. '}:' .. nextValue .. ':channel', 1);" + + "end; " + + "end; " + + + "local items = redis.call('lrange', KEYS[2], 0, -1) " + + "local found = false; " + + "for i=1,#items do " + + "if items[i] == ARGV[1] then " + + "found = true; " + + "break;" + + "end; " + + "end; " + + + "if found == false then " + + "redis.call('lpush', KEYS[2], ARGV[1]); " + + "end; " + + + "local value = redis.call('lindex', KEYS[2], 0); " + + "if value == ARGV[1] then " + + "redis.call('set', KEYS[3], ARGV[2]);" + + "local size = redis.call('llen', KEYS[2]); " + + "if size > 1 then " + + "redis.call('lpop', KEYS[2]);" + + "redis.call('rpush', KEYS[2], value);" + + "local nextValue = redis.call('lindex', KEYS[2], 0); " + + "redis.call('publish', '{' .. KEYS[1] .. '}:' .. nextValue .. ':channel', 1);" + + "end; " + + "return nil;" + + "end;" + + "return tonumber(timeout) - tonumber(ARGV[3]);", + Arrays.asList(getName(), getIdsListName(), getTimeoutName()), getCurrentId(), timeout, System.currentTimeMillis()); + } + + private String getCurrentId() { + return id.toString(); + } + + + @Override + public V take() throws InterruptedException { + Long currentTimeout = tryAcquire(); + if (currentTimeout == null) { + return super.take(); + } + + RFuture future = subscribe(); + commandExecutor.syncSubscription(future); + try { + while (true) { + currentTimeout = tryAcquire(); + if (currentTimeout == null) { + return super.take(); + } + + getEntry().getLatch().tryAcquire(currentTimeout, TimeUnit.MILLISECONDS); + } + } finally { + unsubscribe(future); + } + } + + @Override + public void destroy() { + if (instances.decrementAndGet() == 0) { + get(commandExecutor.evalWriteAsync(getName(), StringCodec.INSTANCE, RedisCommands.EVAL_VOID_WITH_VALUES, + "for i = 1, #ARGV, 1 do " + + "redis.call('lrem', KEYS[1], 0, ARGV[i]);" + +"end; ", + Collections.singletonList(getIdsListName()), getCurrentId())); + } + } + + @Override + public RFuture takeAsync() { + final RPromise promise = newPromise(); + + RFuture tryAcquireFuture = tryAcquireAsync(); + tryAcquireFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + final Long currentTimeout = future.getNow(); + if (currentTimeout == null) { + final RFuture pollFuture = RedissonBlockingFairQueue.super.takeAsync(); + pollFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + promise.trySuccess(future.getNow()); + } + }); + } else { + final RFuture subscribeFuture = subscribe(); + final AtomicReference futureRef = new AtomicReference(); + subscribeFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + if (futureRef.get() != null) { + futureRef.get().cancel(); + } + + tryTakeAsync(subscribeFuture, promise); + } + }); + } + } + }); + + return promise; + } + + @Override + public V poll() { + Long currentTimeout = tryAcquire(); + if (currentTimeout == null) { + return super.poll(); + } + + return null; + } + + @Override + public RFuture pollAsync() { + final RPromise promise = newPromise(); + + RFuture tryAcquireFuture = tryAcquireAsync(); + tryAcquireFuture.addListener(new 
FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + final Long currentTimeout = future.getNow(); + if (currentTimeout == null) { + final RFuture pollFuture = RedissonBlockingFairQueue.super.pollAsync(); + pollFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + promise.trySuccess(future.getNow()); + } + }); + } else { + promise.trySuccess(null); + } + } + }); + + return promise; + } + + @Override + public V poll(long timeout, TimeUnit unit) throws InterruptedException { + long startTime = System.currentTimeMillis(); + Long currentTimeout = tryAcquire(); + if (currentTimeout == null) { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime > 0) { + return super.poll(remainTime, TimeUnit.MILLISECONDS); + } + return null; + } + + RFuture future = subscribe(); + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (!future.awaitUninterruptibly(remainTime, TimeUnit.MILLISECONDS)) { + return null; + } + + try { + while (true) { + currentTimeout = tryAcquire(); + if (currentTimeout == null) { + spentTime = System.currentTimeMillis() - startTime; + remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime > 0) { + return super.poll(remainTime, TimeUnit.MILLISECONDS); + } + return null; + } + + spentTime = System.currentTimeMillis() - startTime; + remainTime = unit.toMillis(timeout) - spentTime; + remainTime = Math.min(remainTime, currentTimeout); + if (remainTime <= 0 || !getEntry().getLatch().tryAcquire(remainTime, TimeUnit.MILLISECONDS)) { + return null; + } + } + } finally { + unsubscribe(future); + } + } + + @Override + public RFuture pollAsync(final long timeout, final TimeUnit unit) { + final long startTime = System.currentTimeMillis(); + final RPromise promise = newPromise(); + + RFuture tryAcquireFuture = tryAcquireAsync(); + tryAcquireFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + Long currentTimeout = future.getNow(); + if (currentTimeout == null) { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime > 0) { + final RFuture pollFuture = RedissonBlockingFairQueue.super.pollAsync(remainTime, TimeUnit.MILLISECONDS); + pollFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + promise.trySuccess(future.getNow()); + } + }); + } else { + promise.trySuccess(null); + } + } else { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + remainTime = Math.min(remainTime, currentTimeout); + if (remainTime <= 0) { + promise.trySuccess(null); + return; + } + + final RFuture subscribeFuture = subscribe(); + final AtomicReference futureRef = new AtomicReference(); + subscribeFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + 
promise.tryFailure(future.cause()); + return; + } + + if (futureRef.get() != null) { + futureRef.get().cancel(); + } + + tryPollAsync(startTime, timeout, unit, subscribeFuture, promise); + } + }); + if (!subscribeFuture.isDone()) { + Timeout scheduledFuture = commandExecutor.getConnectionManager().newTimeout(new TimerTask() { + @Override + public void run(Timeout timeout) throws Exception { + if (!subscribeFuture.isDone()) { + subscribeFuture.cancel(false); + promise.trySuccess(null); + } + } + }, remainTime, TimeUnit.MILLISECONDS); + futureRef.set(scheduledFuture); + } + } + } + }); + + return promise; + } + + private void tryTakeAsync(final RFuture subscribeFuture, final RPromise promise) { + if (promise.isDone()) { + unsubscribe(subscribeFuture); + return; + } + + RFuture tryAcquireFuture = tryAcquireAsync(); + tryAcquireFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + unsubscribe(subscribeFuture); + promise.tryFailure(future.cause()); + return; + } + + Long currentTimeout = future.getNow(); + if (currentTimeout == null) { + final RFuture pollFuture = RedissonBlockingFairQueue.super.takeAsync(); + pollFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + unsubscribe(subscribeFuture); + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + promise.trySuccess(future.getNow()); + } + }); + } else { + final RedissonLockEntry entry = getEntry(); + synchronized (entry) { + if (entry.getLatch().tryAcquire()) { + tryTakeAsync(subscribeFuture, promise); + } else { + final AtomicBoolean executed = new AtomicBoolean(); + final AtomicReference futureRef = new AtomicReference(); + final Runnable listener = new Runnable() { + @Override + public void run() { + executed.set(true); + if (futureRef.get() != null) { + futureRef.get().cancel(); + } + + tryTakeAsync(subscribeFuture, promise); + } + }; + entry.addListener(listener); + + if (!executed.get()) { + Timeout scheduledFuture = commandExecutor.getConnectionManager().newTimeout(new TimerTask() { + @Override + public void run(Timeout t) throws Exception { + synchronized (entry) { + if (entry.removeListener(listener)) { + tryTakeAsync(subscribeFuture, promise); + } + } + } + }, currentTimeout, TimeUnit.MILLISECONDS); + futureRef.set(scheduledFuture); + } + } + } + } + }; + }); + } + + private void tryPollAsync(final long startTime, final long timeout, final TimeUnit unit, + final RFuture subscribeFuture, final RPromise promise) { + if (promise.isDone()) { + unsubscribe(subscribeFuture); + return; + } + + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime <= 0) { + unsubscribe(subscribeFuture); + promise.trySuccess(null); + return; + } + + RFuture tryAcquireFuture = tryAcquireAsync(); + tryAcquireFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + unsubscribe(subscribeFuture); + promise.tryFailure(future.cause()); + return; + } + + Long currentTimeout = future.getNow(); + if (currentTimeout == null) { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime > 0) { + final RFuture pollFuture = RedissonBlockingFairQueue.super.pollAsync(remainTime, TimeUnit.MILLISECONDS); + pollFuture.addListener(new 
FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + unsubscribe(subscribeFuture); + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + promise.trySuccess(future.getNow()); + } + }); + } else { + unsubscribe(subscribeFuture); + promise.trySuccess(null); + } + } else { + final RedissonLockEntry entry = getEntry(); + synchronized (entry) { + if (entry.getLatch().tryAcquire()) { + tryPollAsync(startTime, timeout, unit, subscribeFuture, promise); + } else { + final AtomicBoolean executed = new AtomicBoolean(); + final AtomicReference futureRef = new AtomicReference(); + final Runnable listener = new Runnable() { + @Override + public void run() { + executed.set(true); + if (futureRef.get() != null) { + futureRef.get().cancel(); + } + + tryPollAsync(startTime, timeout, unit, subscribeFuture, promise); + } + }; + entry.addListener(listener); + + if (!executed.get()) { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + Timeout scheduledFuture = commandExecutor.getConnectionManager().newTimeout(new TimerTask() { + @Override + public void run(Timeout t) throws Exception { + synchronized (entry) { + if (entry.removeListener(listener)) { + tryPollAsync(startTime, timeout, unit, subscribeFuture, promise); + } + } + } + }, remainTime, TimeUnit.MILLISECONDS); + futureRef.set(scheduledFuture); + } + } + } + } + }; + }); + } + + @Override + public V pollLastAndOfferFirstTo(String queueName, long timeout, TimeUnit unit) throws InterruptedException { + long startTime = System.currentTimeMillis(); + Long currentTimeout = tryAcquire(); + if (currentTimeout == null) { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime > 0) { + return super.pollLastAndOfferFirstTo(queueName, remainTime, TimeUnit.MILLISECONDS); + } + return null; + } + + RFuture future = subscribe(); + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (!future.awaitUninterruptibly(remainTime, TimeUnit.MILLISECONDS)) { + return null; + } + + try { + while (true) { + currentTimeout = tryAcquire(); + if (currentTimeout == null) { + spentTime = System.currentTimeMillis() - startTime; + remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime > 0) { + return super.pollLastAndOfferFirstTo(queueName, remainTime, TimeUnit.MILLISECONDS); + } + return null; + } + + spentTime = System.currentTimeMillis() - startTime; + remainTime = unit.toMillis(timeout) - spentTime; + remainTime = Math.min(remainTime, currentTimeout); + if (remainTime <= 0 || !getEntry().getLatch().tryAcquire(remainTime, TimeUnit.MILLISECONDS)) { + return null; + } + } + } finally { + unsubscribe(future); + } + } + + @Override + public RFuture pollLastAndOfferFirstToAsync(final String queueName, final long timeout, final TimeUnit unit) { + final long startTime = System.currentTimeMillis(); + final RPromise promise = newPromise(); + + RFuture tryAcquireFuture = tryAcquireAsync(); + tryAcquireFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + Long currentTimeout = future.getNow(); + if (currentTimeout == null) { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime > 
0) { + final RFuture pollFuture = RedissonBlockingFairQueue.super.pollLastAndOfferFirstToAsync(queueName, remainTime, TimeUnit.MILLISECONDS); + pollFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + promise.trySuccess(future.getNow()); + } + }); + } else { + promise.trySuccess(null); + } + } else { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + remainTime = Math.min(remainTime, currentTimeout); + if (remainTime <= 0) { + promise.trySuccess(null); + return; + } + + final RFuture subscribeFuture = subscribe(); + final AtomicReference futureRef = new AtomicReference(); + subscribeFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + if (futureRef.get() != null) { + futureRef.get().cancel(); + } + + tryPollLastAndOfferFirstToAsync(startTime, timeout, unit, subscribeFuture, promise, queueName); + } + }); + if (!subscribeFuture.isDone()) { + Timeout scheduledFuture = commandExecutor.getConnectionManager().newTimeout(new TimerTask() { + @Override + public void run(Timeout timeout) throws Exception { + if (!subscribeFuture.isDone()) { + subscribeFuture.cancel(false); + promise.trySuccess(null); + } + } + }, remainTime, TimeUnit.MILLISECONDS); + futureRef.set(scheduledFuture); + } + } + } + }); + + return promise; + } + + private void tryPollLastAndOfferFirstToAsync(final long startTime, final long timeout, final TimeUnit unit, + final RFuture subscribeFuture, final RPromise promise, final String queueName) { + if (promise.isDone()) { + unsubscribe(subscribeFuture); + return; + } + + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime <= 0) { + unsubscribe(subscribeFuture); + promise.trySuccess(null); + return; + } + + RFuture tryAcquireFuture = tryAcquireAsync(); + tryAcquireFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + unsubscribe(subscribeFuture); + promise.tryFailure(future.cause()); + return; + } + + Long currentTimeout = future.getNow(); + if (currentTimeout == null) { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + if (remainTime > 0) { + final RFuture pollFuture = RedissonBlockingFairQueue.super.pollLastAndOfferFirstToAsync(queueName, remainTime, TimeUnit.MILLISECONDS); + pollFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + unsubscribe(subscribeFuture); + if (!future.isSuccess()) { + promise.tryFailure(future.cause()); + return; + } + + promise.trySuccess(future.getNow()); + } + }); + } else { + unsubscribe(subscribeFuture); + promise.trySuccess(null); + } + } else { + final RedissonLockEntry entry = getEntry(); + synchronized (entry) { + if (entry.getLatch().tryAcquire()) { + tryPollAsync(startTime, timeout, unit, subscribeFuture, promise); + } else { + final AtomicBoolean executed = new AtomicBoolean(); + final AtomicReference futureRef = new AtomicReference(); + final Runnable listener = new Runnable() { + @Override + public void run() { + executed.set(true); + if (futureRef.get() != null) { + 
futureRef.get().cancel(); + } + + tryPollLastAndOfferFirstToAsync(startTime, timeout, unit, subscribeFuture, promise, queueName); + } + }; + entry.addListener(listener); + + if (!executed.get()) { + long spentTime = System.currentTimeMillis() - startTime; + long remainTime = unit.toMillis(timeout) - spentTime; + Timeout scheduledFuture = commandExecutor.getConnectionManager().newTimeout(new TimerTask() { + @Override + public void run(Timeout t) throws Exception { + synchronized (entry) { + if (entry.removeListener(listener)) { + tryPollLastAndOfferFirstToAsync(startTime, timeout, unit, subscribeFuture, promise, queueName); + } + } + } + }, remainTime, TimeUnit.MILLISECONDS); + futureRef.set(scheduledFuture); + } + } + } + } + }; + }); + } + + +} diff --git a/redisson/src/main/java/org/redisson/RedissonBlockingQueue.java b/redisson/src/main/java/org/redisson/RedissonBlockingQueue.java index 98a9ec2e2..3ecfc38ce 100644 --- a/redisson/src/main/java/org/redisson/RedissonBlockingQueue.java +++ b/redisson/src/main/java/org/redisson/RedissonBlockingQueue.java @@ -84,7 +84,7 @@ public class RedissonBlockingQueue extends RedissonQueue implements RBlock @Override public RFuture pollAsync(long timeout, TimeUnit unit) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.BLPOP_VALUE, getName(), unit.toSeconds(timeout)); + return commandExecutor.writeAsync(getName(), codec, RedisCommands.BLPOP_VALUE, getName(), toSeconds(timeout, unit)); } /* @@ -118,13 +118,13 @@ public class RedissonBlockingQueue extends RedissonQueue implements RBlock for (Object name : queueNames) { params.add(name); } - params.add(unit.toSeconds(timeout)); + params.add(toSeconds(timeout, unit)); return commandExecutor.writeAsync(getName(), codec, RedisCommands.BLPOP_VALUE, params.toArray()); } @Override public RFuture pollLastAndOfferFirstToAsync(String queueName, long timeout, TimeUnit unit) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.BRPOPLPUSH, getName(), queueName, unit.toSeconds(timeout)); + return commandExecutor.writeAsync(getName(), codec, RedisCommands.BRPOPLPUSH, getName(), queueName, toSeconds(timeout, unit)); } @Override diff --git a/redisson/src/main/java/org/redisson/RedissonBloomFilter.java b/redisson/src/main/java/org/redisson/RedissonBloomFilter.java index 88cbf8daa..0818a0d92 100644 --- a/redisson/src/main/java/org/redisson/RedissonBloomFilter.java +++ b/redisson/src/main/java/org/redisson/RedissonBloomFilter.java @@ -35,7 +35,6 @@ import org.redisson.client.protocol.decoder.ObjectMapReplayDecoder; import org.redisson.command.CommandBatchService; import org.redisson.command.CommandExecutor; -import io.netty.util.concurrent.Future; import net.openhft.hashing.LongHashFunction; /** @@ -212,9 +211,19 @@ public class RedissonBloomFilter extends RedissonExpirable implements RBloomF @Override public boolean tryInit(long expectedInsertions, double falseProbability) { + if (falseProbability > 1) { + throw new IllegalArgumentException("Bloom filter false probability can't be greater than 1"); + } + if (falseProbability < 0) { + throw new IllegalArgumentException("Bloom filter false probability can't be negative"); + } + size = optimalNumOfBits(expectedInsertions, falseProbability); + if (size == 0) { + throw new IllegalArgumentException("Bloom filter calculated size is " + size); + } if (size > MAX_SIZE) { - throw new IllegalArgumentException("Bloom filter can't be greater than " + MAX_SIZE + ". 
But calculated size is " + size); + throw new IllegalArgumentException("Bloom filter size can't be greater than " + MAX_SIZE + ". But calculated size is " + size); } hashIterations = optimalNumOfHashFunctions(expectedInsertions, size); diff --git a/redisson/src/main/java/org/redisson/RedissonBucket.java b/redisson/src/main/java/org/redisson/RedissonBucket.java index 62279ffd5..66c852174 100644 --- a/redisson/src/main/java/org/redisson/RedissonBucket.java +++ b/redisson/src/main/java/org/redisson/RedissonBucket.java @@ -24,6 +24,12 @@ import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommands; import org.redisson.command.CommandAsyncExecutor; +/** + * + * @author Nikita Koksharov + * + * @param value type + */ public class RedissonBucket extends RedissonExpirable implements RBucket { protected RedissonBucket(CommandAsyncExecutor connectionManager, String name) { @@ -97,12 +103,12 @@ public class RedissonBucket extends RedissonExpirable implements RBucket { } @Override - public int size() { + public long size() { return get(sizeAsync()); } @Override - public RFuture sizeAsync() { + public RFuture sizeAsync() { return commandExecutor.readAsync(getName(), codec, RedisCommands.STRLEN, getName()); } diff --git a/redisson/src/main/java/org/redisson/RedissonCountDownLatch.java b/redisson/src/main/java/org/redisson/RedissonCountDownLatch.java index 569d2ca62..e110550d6 100644 --- a/redisson/src/main/java/org/redisson/RedissonCountDownLatch.java +++ b/redisson/src/main/java/org/redisson/RedissonCountDownLatch.java @@ -50,9 +50,9 @@ public class RedissonCountDownLatch extends RedissonObject implements RCountDown } public void await() throws InterruptedException { - RFuture promise = subscribe(); + RFuture future = subscribe(); try { - get(promise); + commandExecutor.syncSubscription(future); while (getCount() > 0) { // waiting for open state @@ -62,7 +62,7 @@ public class RedissonCountDownLatch extends RedissonObject implements RCountDown } } } finally { - unsubscribe(promise); + unsubscribe(future); } } diff --git a/redisson/src/main/java/org/redisson/RedissonDelayedQueue.java b/redisson/src/main/java/org/redisson/RedissonDelayedQueue.java new file mode 100644 index 000000000..bd9caf60d --- /dev/null +++ b/redisson/src/main/java/org/redisson/RedissonDelayedQueue.java @@ -0,0 +1,502 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson; + +import java.util.Arrays; +import java.util.Collection; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.concurrent.TimeUnit; + +import org.redisson.api.RDelayedQueue; +import org.redisson.api.RFuture; +import org.redisson.api.RQueue; +import org.redisson.api.RTopic; +import org.redisson.client.codec.Codec; +import org.redisson.client.codec.LongCodec; +import org.redisson.client.protocol.RedisCommand; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.client.protocol.convertor.BooleanReplayConvertor; +import org.redisson.client.protocol.convertor.VoidReplayConvertor; +import org.redisson.command.CommandAsyncExecutor; + +import io.netty.util.internal.ThreadLocalRandom; + +/** + * + * @author Nikita Koksharov + * + * @param value type + */ +public class RedissonDelayedQueue extends RedissonExpirable implements RDelayedQueue { + + private static final RedisCommand EVAL_OFFER = new RedisCommand("EVAL", new VoidReplayConvertor(), 9); + + private final QueueTransferService queueTransferService; + + protected RedissonDelayedQueue(QueueTransferService queueTransferService, Codec codec, final CommandAsyncExecutor commandExecutor, String name) { + super(codec, commandExecutor, name); + + QueueTransferTask task = new QueueTransferTask(commandExecutor.getConnectionManager()) { + + @Override + protected RFuture pushTaskAsync() { + return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_LONG, + "local expiredValues = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " + + "if #expiredValues > 0 then " + + "for i, v in ipairs(expiredValues) do " + + "local randomId, value = struct.unpack('dLc0', v);" + + "redis.call('rpush', KEYS[1], value);" + + "redis.call('lrem', KEYS[3], 1, v);" + + "end; " + + "redis.call('zrem', KEYS[2], unpack(expiredValues));" + + "end; " + // get startTime from scheduler queue head task + + "local v = redis.call('zrange', KEYS[2], 0, 0, 'WITHSCORES'); " + + "if v[1] ~= nil then " + + "return v[2]; " + + "end " + + "return nil;", + Arrays.asList(getName(), getTimeoutSetName(), getQueueName()), + System.currentTimeMillis(), 100); + } + + @Override + protected RTopic getTopic() { + return new RedissonTopic(LongCodec.INSTANCE, commandExecutor, getChannelName()); + } + }; + + queueTransferService.schedule(getQueueName(), task); + + this.queueTransferService = queueTransferService; + } + + private String getChannelName() { + return prefixName("redisson_delay_queue_channel", getName()); + } + + private String getQueueName() { + return prefixName("redisson_delay_queue", getName()); + } + + private String getTimeoutSetName() { + return prefixName("redisson_delay_queue_timeout", getName()); + } + + public void offer(V e, long delay, TimeUnit timeUnit) { + get(offerAsync(e, delay, timeUnit)); + } + + public RFuture offerAsync(V e, long delay, TimeUnit timeUnit) { + long delayInMs = timeUnit.toMillis(delay); + long timeout = System.currentTimeMillis() + delayInMs; + + long randomId = ThreadLocalRandom.current().nextLong(); + return commandExecutor.evalWriteAsync(getName(), codec, EVAL_OFFER, + "local value = struct.pack('dLc0', tonumber(ARGV[2]), string.len(ARGV[3]), ARGV[3]);" + + "redis.call('zadd', KEYS[2], ARGV[1], value);" + + "redis.call('rpush', KEYS[3], value);" + // if new object added to queue head when publish its startTime + // to all scheduler workers + + "local v = 
redis.call('zrange', KEYS[2], 0, 0); " + + "if v[1] == value then " + + "redis.call('publish', KEYS[4], ARGV[1]); " + + "end;" + , + Arrays.asList(getName(), getTimeoutSetName(), getQueueName(), getChannelName()), + timeout, randomId, e); + } + + @Override + public boolean add(V e) { + throw new UnsupportedOperationException("Use 'offer' method with timeout param"); + } + + @Override + public boolean offer(V e) { + throw new UnsupportedOperationException("Use 'offer' method with timeout param"); + } + + @Override + public V remove() { + V value = poll(); + if (value == null) { + throw new NoSuchElementException(); + } + return value; + } + + @Override + public V poll() { + return get(pollAsync()); + } + + @Override + public V element() { + V value = peek(); + if (value == null) { + throw new NoSuchElementException(); + } + return value; + } + + @Override + public V peek() { + return get(peekAsync()); + } + + @Override + public int size() { + return get(sizeAsync()); + } + + @Override + public boolean isEmpty() { + return size() == 0; + } + + @Override + public boolean contains(Object o) { + return get(containsAsync(o)); + } + + V getValue(int index) { + return (V)get(commandExecutor.evalReadAsync(getName(), codec, RedisCommands.EVAL_OBJECT, + "local v = redis.call('lindex', KEYS[1], ARGV[1]); " + + "if v ~= false then " + + "local randomId, value = struct.unpack('dLc0', v);" + + "return value; " + + "end " + + "return nil;", + Arrays.asList(getQueueName()), index)); + } + + void remove(int index) { + get(commandExecutor.evalWriteAsync(getName(), null, RedisCommands.EVAL_VOID, + "local v = redis.call('lindex', KEYS[1], ARGV[1]);" + + "if v ~= false then " + + "local randomId, value = struct.unpack('dLc0', v);" + + "redis.call('lrem', KEYS[1], 1, v);" + + "redis.call('zrem', KEYS[2], v);" + + "end; ", + Arrays.asList(getQueueName(), getTimeoutSetName()), index)); + } + + @Override + public Iterator iterator() { + return new Iterator() { + + private V nextCurrentValue; + private V currentValueHasRead; + private int currentIndex = -1; + private boolean hasBeenModified = true; + + @Override + public boolean hasNext() { + V val = RedissonDelayedQueue.this.getValue(currentIndex+1); + if (val != null) { + nextCurrentValue = val; + } + return val != null; + } + + @Override + public V next() { + if (nextCurrentValue == null && !hasNext()) { + throw new NoSuchElementException("No such element at index " + currentIndex); + } + currentIndex++; + currentValueHasRead = nextCurrentValue; + nextCurrentValue = null; + hasBeenModified = false; + return currentValueHasRead; + } + + @Override + public void remove() { + if (currentValueHasRead == null) { + throw new IllegalStateException("Neither next nor previous have been called"); + } + if (hasBeenModified) { + throw new IllegalStateException("Element been already deleted"); + } + RedissonDelayedQueue.this.remove(currentIndex); + currentIndex--; + hasBeenModified = true; + currentValueHasRead = null; + } + + }; + } + + @Override + public Object[] toArray() { + List list = readAll(); + return list.toArray(); + } + + @Override + public T[] toArray(T[] a) { + List list = readAll(); + return list.toArray(a); + } + + @Override + public List readAll() { + return get(readAllAsync()); + } + + @Override + public RFuture> readAllAsync() { + return commandExecutor.evalReadAsync(getName(), codec, RedisCommands.EVAL_LIST, + "local result = {}; " + + "local items = redis.call('lrange', KEYS[1], 0, -1); " + + "for i, v in ipairs(items) do " + + "local randomId, value = 
struct.unpack('dLc0', v); " + + "table.insert(result, value);" + + "end; " + + "return result; ", + Collections.singletonList(getQueueName())); + } + + @Override + public boolean remove(Object o) { + return get(removeAsync(o)); + } + + @Override + public RFuture removeAsync(Object o) { + return removeAsync(o, 1); + } + + protected RFuture removeAsync(Object o, int count) { + return commandExecutor.evalWriteAsync(getName(), codec, new RedisCommand("EVAL", new BooleanReplayConvertor(), 4), + "local s = redis.call('llen', KEYS[1]);" + + "for i = 0, s-1, 1 do " + + "local v = redis.call('lindex', KEYS[1], i);" + + "local randomId, value = struct.unpack('dLc0', v);" + + "if ARGV[1] == value then " + + "redis.call('lrem', KEYS[1], 1, v);" + + "return 1;" + + "end; " + + "end;" + + "return 0;", + Collections.singletonList(getQueueName()), o); + } + + @Override + public RFuture containsAllAsync(Collection c) { + if (c.isEmpty()) { + return newSucceededFuture(true); + } + + return commandExecutor.evalReadAsync(getName(), codec, RedisCommands.EVAL_BOOLEAN_WITH_VALUES, + "local s = redis.call('llen', KEYS[1]);" + + "for i = 0, s-1, 1 do " + + "local v = redis.call('lindex', KEYS[1], i);" + + "local randomId, value = struct.unpack('dLc0', v);" + + + "for j = 1, #ARGV, 1 do " + + "if value == ARGV[j] then " + + "table.remove(ARGV, j) " + + "end; " + + "end; " + + "end;" + + "return #ARGV == 0 and 1 or 0;", + Collections.singletonList(getQueueName()), c.toArray()); + } + + @Override + public boolean containsAll(Collection c) { + return get(containsAllAsync(c)); + } + + @Override + public boolean addAll(Collection c) { + throw new UnsupportedOperationException("Use 'offer' method with timeout param"); + } + + @Override + public RFuture removeAllAsync(Collection c) { + if (c.isEmpty()) { + return newSucceededFuture(false); + } + + return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_BOOLEAN_WITH_VALUES, + "local result = 0;" + + "local s = redis.call('llen', KEYS[1]);" + + "local i = 0;" + + "while i < s do " + + "local v = redis.call('lindex', KEYS[1], i);" + + "local randomId, value = struct.unpack('dLc0', v);" + + + "for j = 1, #ARGV, 1 do " + + "if value == ARGV[j] then " + + "result = 1; " + + "i = i - 1; " + + "s = s - 1; " + + "redis.call('lrem', KEYS[1], 0, v); " + + "break; " + + "end; " + + "end; " + + "i = i + 1;" + + "end; " + + "return result;", + Collections.singletonList(getQueueName()), c.toArray()); + } + + @Override + public boolean removeAll(Collection c) { + return get(removeAllAsync(c)); + } + + @Override + public boolean retainAll(Collection c) { + return get(retainAllAsync(c)); + } + + @Override + public RFuture retainAllAsync(Collection c) { + if (c.isEmpty()) { + return deleteAsync(); + } + + return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_BOOLEAN_WITH_VALUES, + "local changed = 0; " + + "local items = redis.call('lrange', KEYS[1], 0, -1); " + + "local i = 1; " + + "while i <= #items do " + + "local randomId, element = struct.unpack('dLc0', items[i]); " + + "local isInAgrs = false; " + + "for j = 1, #ARGV, 1 do " + + "if ARGV[j] == element then " + + "isInAgrs = true; " + + "break; " + + "end; " + + "end; " + + "if isInAgrs == false then " + + "redis.call('LREM', KEYS[1], 0, items[i]) " + + "changed = 1; " + + "end; " + + "i = i + 1; " + + "end; " + + "return changed; ", + Collections.singletonList(getQueueName()), c.toArray()); + } + + @Override + public void clear() { + delete(); + } + + @Override + public RFuture deleteAsync() { 
+ return commandExecutor.writeAsync(getName(), RedisCommands.DEL_OBJECTS, getQueueName(), getTimeoutSetName()); + } + + @Override + public RFuture peekAsync() { + return commandExecutor.evalReadAsync(getName(), codec, RedisCommands.EVAL_OBJECT, + "local v = redis.call('lindex', KEYS[1], 0); " + + "if v ~= nil then " + + "local randomId, value = struct.unpack('dLc0', v);" + + "return value; " + + "end " + + "return nil;", + Arrays.asList(getQueueName())); + } + + @Override + public RFuture pollAsync() { + return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_OBJECT, + "local v = redis.call('lpop', KEYS[1]); " + + "if v ~= nil then " + + "redis.call('zrem', KEYS[2], v); " + + "local randomId, value = struct.unpack('dLc0', v);" + + "return value; " + + "end " + + "return nil;", + Arrays.asList(getQueueName(), getTimeoutSetName())); + } + + @Override + public RFuture offerAsync(V e) { + throw new UnsupportedOperationException("Use 'offer' method with timeout param"); + } + + @Override + public RFuture pollLastAndOfferFirstToAsync(String queueName) { + return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_OBJECT, + "local v = redis.call('rpop', KEYS[1]); " + + "if v ~= nil then " + + "redis.call('zrem', KEYS[2], v); " + + "local randomId, value = struct.unpack('dLc0', v);" + + "redis.call('lpush', KEYS[3], value); " + + "return value; " + + "end " + + "return nil;", + Arrays.asList(getQueueName(), getTimeoutSetName(), queueName)); + } + + @Override + public RFuture containsAsync(Object o) { + return commandExecutor.evalReadAsync(getName(), codec, new RedisCommand("EVAL", new BooleanReplayConvertor(), 4), + "local s = redis.call('llen', KEYS[1]);" + + "for i = 0, s-1, 1 do " + + "local v = redis.call('lindex', KEYS[1], i);" + + "local randomId, value = struct.unpack('dLc0', v);" + + "if ARGV[1] == value then " + + "return 1;" + + "end; " + + "end;" + + "return 0;", + Collections.singletonList(getQueueName()), o); + } + + @Override + public RFuture sizeAsync() { + return commandExecutor.readAsync(getName(), codec, RedisCommands.LLEN_INT, getQueueName()); + } + + @Override + public RFuture addAsync(V e) { + throw new UnsupportedOperationException("Use 'offer' method with timeout param"); + } + + @Override + public RFuture addAllAsync(Collection c) { + throw new UnsupportedOperationException("Use 'offer' method with timeout param"); + } + + @Override + public V pollLastAndOfferFirstTo(String dequeName) { + return get(pollLastAndOfferFirstToAsync(dequeName)); + } + + @Override + public V pollLastAndOfferFirstTo(RQueue deque) { + return get(pollLastAndOfferFirstToAsync(deque.getName())); + } + + @Override + public void destroy() { + queueTransferService.remove(getQueueName()); + } + +} diff --git a/redisson/src/main/java/org/redisson/RedissonDeque.java b/redisson/src/main/java/org/redisson/RedissonDeque.java index 2eb0e58f4..0b4e05e16 100644 --- a/redisson/src/main/java/org/redisson/RedissonDeque.java +++ b/redisson/src/main/java/org/redisson/RedissonDeque.java @@ -25,8 +25,8 @@ import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.convertor.VoidReplayConvertor; +import org.redisson.client.protocol.decoder.ListFirstObjectDecoder; import org.redisson.command.CommandAsyncExecutor; -import org.redisson.connection.decoder.ListFirstObjectDecoder; /** * Distributed and concurrent implementation of {@link java.util.Queue} 
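The RedissonDelayedQueue added above transfers each element into a plain destination queue once its delay expires (via the zset-driven transfer task in pushTaskAsync). A minimal usage sketch follows; it assumes RedissonClient exposes the new queue through a getDelayedQueue(destinationQueue) accessor, and the queue name and values are illustrative only, not taken from this change:

    import java.util.concurrent.TimeUnit;

    import org.redisson.Redisson;
    import org.redisson.api.RDelayedQueue;
    import org.redisson.api.RQueue;
    import org.redisson.api.RedissonClient;

    public class DelayedQueueExample {
        public static void main(String[] args) {
            RedissonClient redisson = Redisson.create();

            // ordinary queue that consumers read from
            RQueue<String> destinationQueue = redisson.getQueue("myQueue");
            // assumption: getDelayedQueue(...) is the accessor for the new RDelayedQueue
            RDelayedQueue<String> delayedQueue = redisson.getDelayedQueue(destinationQueue);

            // the element becomes visible in destinationQueue only after the 10 second delay
            delayedQueue.offer("task1", 10, TimeUnit.SECONDS);

            // consumers keep polling the plain destination queue as usual
            String task = destinationQueue.poll();

            // stop the background transfer task once the delayed queue is no longer needed
            delayedQueue.destroy();
            redisson.shutdown();
        }
    }

Plain add/offer without a delay are rejected by design in this class (UnsupportedOperationException), so producers must always supply the delay and time unit.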
diff --git a/redisson/src/main/java/org/redisson/RedissonExecutorService.java b/redisson/src/main/java/org/redisson/RedissonExecutorService.java index 3232b7ac3..94addf097 100644 --- a/redisson/src/main/java/org/redisson/RedissonExecutorService.java +++ b/redisson/src/main/java/org/redisson/RedissonExecutorService.java @@ -40,6 +40,7 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicReference; +import org.redisson.api.CronSchedule; import org.redisson.api.RFuture; import org.redisson.api.RScheduledExecutorService; import org.redisson.api.RScheduledFuture; @@ -142,98 +143,6 @@ public class RedissonExecutorService implements RScheduledExecutorService { asyncScheduledServiceAtFixed = scheduledRemoteService.get(RemoteExecutorServiceAsync.class, RemoteInvocationOptions.defaults().noAck().noResult()); } - private void registerScheduler() { - final AtomicReference timeoutReference = new AtomicReference(); - - RTopic schedulerTopic = redisson.getTopic(schedulerChannelName, LongCodec.INSTANCE); - schedulerTopic.addListener(new BaseStatusListener() { - @Override - public void onSubscribe(String channel) { - RFuture startTimeFuture = commandExecutor.evalReadAsync(schedulerQueueName, LongCodec.INSTANCE, RedisCommands.EVAL_LONG, - // get startTime from scheduler queue head task - "local v = redis.call('zrange', KEYS[1], 0, 0, 'WITHSCORES'); " - + "if v[1] ~= nil then " - + "return v[2]; " - + "end " - + "return nil;", - Collections.singletonList(schedulerQueueName)); - - addListener(timeoutReference, startTimeFuture); - } - }); - - schedulerTopic.addListener(new MessageListener() { - @Override - public void onMessage(String channel, Long startTime) { - scheduleTask(timeoutReference, startTime); - } - }); - } - - private void scheduleTask(final AtomicReference timeoutReference, final Long startTime) { - if (startTime == null) { - return; - } - - if (timeoutReference.get() != null) { - timeoutReference.get().cancel(); - timeoutReference.set(null); - } - - long delay = startTime - System.currentTimeMillis(); - if (delay > 10) { - Timeout timeout = connectionManager.newTimeout(new TimerTask() { - @Override - public void run(Timeout timeout) throws Exception { - pushTask(timeoutReference, startTime); - } - }, delay, TimeUnit.MILLISECONDS); - timeoutReference.set(timeout); - } else { - pushTask(timeoutReference, startTime); - } - } - - private void pushTask(AtomicReference timeoutReference, Long startTime) { - RFuture startTimeFuture = commandExecutor.evalWriteAsync(name, LongCodec.INSTANCE, RedisCommands.EVAL_LONG, - "local expiredTaskIds = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " - + "if #expiredTaskIds > 0 then " - + "redis.call('zrem', KEYS[2], unpack(expiredTaskIds));" - + "local expiredTasks = redis.call('hmget', KEYS[3], unpack(expiredTaskIds));" - + "redis.call('rpush', KEYS[1], unpack(expiredTasks));" - + "end; " - // get startTime from scheduler queue head task - + "local v = redis.call('zrange', KEYS[2], 0, 0, 'WITHSCORES'); " - + "if v[1] ~= nil then " - + "return v[2]; " - + "end " - + "return nil;", - Arrays.asList(requestQueueName, schedulerQueueName, schedulerTasksName), - System.currentTimeMillis(), 10); - - addListener(timeoutReference, startTimeFuture); - } - - private void addListener(final AtomicReference timeoutReference, RFuture startTimeFuture) { - startTimeFuture.addListener(new FutureListener() { - @Override - public void operationComplete(io.netty.util.concurrent.Future 
future) throws Exception { - if (!future.isSuccess()) { - if (future.cause() instanceof RedissonShutdownException) { - return; - } - log.error(future.cause().getMessage(), future.cause()); - scheduleTask(timeoutReference, System.currentTimeMillis() + 5 * 1000L); - return; - } - - if (future.getNow() != null) { - scheduleTask(timeoutReference, future.getNow()); - } - } - }); - } - @Override public void registerWorkers(int workers) { registerWorkers(workers, commandExecutor.getConnectionManager().getExecutor()); @@ -241,7 +150,32 @@ public class RedissonExecutorService implements RScheduledExecutorService { @Override public void registerWorkers(int workers, ExecutorService executor) { - registerScheduler(); + QueueTransferTask scheduler = new QueueTransferTask(connectionManager) { + @Override + protected RTopic getTopic() { + return new RedissonTopic(LongCodec.INSTANCE, commandExecutor, schedulerChannelName); + } + + @Override + protected RFuture pushTaskAsync() { + return commandExecutor.evalWriteAsync(name, LongCodec.INSTANCE, RedisCommands.EVAL_LONG, + "local expiredTaskIds = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " + + "if #expiredTaskIds > 0 then " + + "redis.call('zrem', KEYS[2], unpack(expiredTaskIds));" + + "local expiredTasks = redis.call('hmget', KEYS[3], unpack(expiredTaskIds));" + + "redis.call('rpush', KEYS[1], unpack(expiredTasks));" + + "end; " + // get startTime from scheduler queue head task + + "local v = redis.call('zrange', KEYS[2], 0, 0, 'WITHSCORES'); " + + "if v[1] ~= nil then " + + "return v[2]; " + + "end " + + "return nil;", + Arrays.asList(requestQueueName, schedulerQueueName, schedulerTasksName), + System.currentTimeMillis(), 100); + } + }; + scheduler.start(); RemoteExecutorServiceImpl service = new RemoteExecutorServiceImpl(commandExecutor, redisson, codec, requestQueueName); diff --git a/redisson/src/main/java/org/redisson/RedissonExpirable.java b/redisson/src/main/java/org/redisson/RedissonExpirable.java index ea020d1bc..84ba5a9d7 100644 --- a/redisson/src/main/java/org/redisson/RedissonExpirable.java +++ b/redisson/src/main/java/org/redisson/RedissonExpirable.java @@ -25,6 +25,11 @@ import org.redisson.client.codec.StringCodec; import org.redisson.client.protocol.RedisCommands; import org.redisson.command.CommandAsyncExecutor; +/** + * + * @author Nikita Koksharov + * + */ abstract class RedissonExpirable extends RedissonObject implements RExpirable { RedissonExpirable(CommandAsyncExecutor connectionManager, String name) { diff --git a/redisson/src/main/java/org/redisson/RedissonFairLock.java b/redisson/src/main/java/org/redisson/RedissonFairLock.java index 73dd905fe..6b250f8bc 100644 --- a/redisson/src/main/java/org/redisson/RedissonFairLock.java +++ b/redisson/src/main/java/org/redisson/RedissonFairLock.java @@ -48,11 +48,11 @@ public class RedissonFairLock extends RedissonLock implements RLock { } String getThreadsQueueName() { - return "redisson_lock_queue:{" + getName() + "}"; + return prefixName("redisson_lock_queue", getName()); } - String getThreadElementName(long threadId) { - return "redisson_lock_thread:{" + getName() + "}:" + getLockName(threadId); + String getTimeoutSetName() { + return prefixName("redisson_lock_timeout", getName()); } @Override @@ -72,11 +72,20 @@ public class RedissonFairLock extends RedissonLock implements RLock { getChannelName() + ":" + getLockName(threadId), commandExecutor.getConnectionManager()); } + @Override + protected RFuture acquireFailedAsync(long threadId) { + return 
commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_VOID, + "redis.call('zrem', KEYS[2], ARGV[1]); " + + "redis.call('lrem', KEYS[1], 0, ARGV[1]); ", + Arrays.asList(getThreadsQueueName(), getTimeoutSetName()), getLockName(threadId)); + } + @Override RFuture tryLockInnerAsync(long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand command) { internalLockLeaseTime = unit.toMillis(leaseTime); long threadWaitTime = 5000; + long currentTime = System.currentTimeMillis(); if (command == RedisCommands.EVAL_NULL_BOOLEAN) { return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, command, // remove stale threads @@ -85,7 +94,9 @@ public class RedissonFairLock extends RedissonLock implements RLock { + "if firstThreadId2 == false then " + "break;" + "end; " - + "if redis.call('exists', 'redisson_lock_thread:{' .. KEYS[1] .. '}:' .. firstThreadId2) == 0 then " + + "local timeout = tonumber(redis.call('zscore', KEYS[3], firstThreadId2));" + + "if timeout <= tonumber(ARGV[3]) then " + + "redis.call('zrem', KEYS[3], firstThreadId2); " + "redis.call('lpop', KEYS[2]); " + "else " + "break;" @@ -96,7 +107,7 @@ public class RedissonFairLock extends RedissonLock implements RLock { "if (redis.call('exists', KEYS[1]) == 0) and ((redis.call('exists', KEYS[2]) == 0) " + "or (redis.call('lindex', KEYS[2], 0) == ARGV[2])) then " + "redis.call('lpop', KEYS[2]); " + - "redis.call('del', KEYS[3]); " + + "redis.call('zrem', KEYS[3], ARGV[2]); " + "redis.call('hset', KEYS[1], ARGV[2], 1); " + "redis.call('pexpire', KEYS[1], ARGV[1]); " + "return nil; " + @@ -107,7 +118,8 @@ public class RedissonFairLock extends RedissonLock implements RLock { "return nil; " + "end; " + "return 1;", - Arrays.asList(getName(), getThreadsQueueName(), getThreadElementName(threadId)), internalLockLeaseTime, getLockName(threadId)); + Arrays.asList(getName(), getThreadsQueueName(), getTimeoutSetName()), + internalLockLeaseTime, getLockName(threadId), currentTime); } if (command == RedisCommands.EVAL_LONG) { @@ -118,17 +130,19 @@ public class RedissonFairLock extends RedissonLock implements RLock { + "if firstThreadId2 == false then " + "break;" + "end; " - + "if redis.call('exists', 'redisson_lock_thread:{' .. KEYS[1] .. '}:' .. firstThreadId2) == 0 then " + + "local timeout = tonumber(redis.call('zscore', KEYS[3], firstThreadId2));" + + "if timeout <= tonumber(ARGV[4]) then " + + "redis.call('zrem', KEYS[3], firstThreadId2); " + "redis.call('lpop', KEYS[2]); " + "else " + "break;" + "end; " + "end;" - + - "if (redis.call('exists', KEYS[1]) == 0) and ((redis.call('exists', KEYS[2]) == 0) " + + + "if (redis.call('exists', KEYS[1]) == 0) and ((redis.call('exists', KEYS[2]) == 0) " + "or (redis.call('lindex', KEYS[2], 0) == ARGV[2])) then " + "redis.call('lpop', KEYS[2]); " + - "redis.call('del', KEYS[3]); " + + "redis.call('zrem', KEYS[3], ARGV[2]); " + "redis.call('hset', KEYS[1], ARGV[2], 1); " + "redis.call('pexpire', KEYS[1], ARGV[1]); " + "return nil; " + @@ -138,19 +152,22 @@ public class RedissonFairLock extends RedissonLock implements RLock { "redis.call('pexpire', KEYS[1], ARGV[1]); " + "return nil; " + "end; " + - "local firstThreadId = redis.call('lindex', KEYS[2], 0)" + - "local ttl = redis.call('pttl', KEYS[1]); " + + + "local firstThreadId = redis.call('lindex', KEYS[2], 0); " + + "local ttl; " + "if firstThreadId ~= false and firstThreadId ~= ARGV[2] then " + - "ttl = redis.call('pttl', 'redisson_lock_thread:{' .. KEYS[1] .. '}:' .. 
firstThreadId);" + + "ttl = tonumber(redis.call('zscore', KEYS[3], firstThreadId)) - tonumber(ARGV[4]);" + + "else " + + "ttl = redis.call('pttl', KEYS[1]);" + "end; " + - "if redis.call('exists', KEYS[3]) == 0 then " + + + "local timeout = ttl + tonumber(ARGV[3]);" + + "if redis.call('zadd', KEYS[3], timeout, ARGV[2]) == 1 then " + "redis.call('rpush', KEYS[2], ARGV[2]);" + - "redis.call('set', KEYS[3], 1);" + "end; " + - "redis.call('pexpire', KEYS[3], ttl + tonumber(ARGV[3]));" + "return ttl;", - Arrays.asList(getName(), getThreadsQueueName(), getThreadElementName(threadId)), - internalLockLeaseTime, getLockName(threadId), threadWaitTime); + Arrays.asList(getName(), getThreadsQueueName(), getTimeoutSetName()), + internalLockLeaseTime, getLockName(threadId), currentTime + threadWaitTime, currentTime); } throw new IllegalArgumentException(); @@ -158,25 +175,39 @@ public class RedissonFairLock extends RedissonLock implements RLock { @Override public void unlock() { - Boolean opStatus = commandExecutor.evalWrite(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, + Boolean opStatus = get(unlockInnerAsync(Thread.currentThread().getId())); + + if (opStatus == null) { + throw new IllegalMonitorStateException("attempt to unlock lock, not locked by current thread by node id: " + + id + " thread-id: " + Thread.currentThread().getId()); + } + if (opStatus) { + cancelExpirationRenewal(); + } + } + + @Override + protected RFuture unlockInnerAsync(long threadId) { + return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, // remove stale threads "while true do " + "local firstThreadId2 = redis.call('lindex', KEYS[2], 0);" + "if firstThreadId2 == false then " + "break;" + "end; " - + "if redis.call('exists', 'redisson_lock_thread:{' .. KEYS[1] .. '}:' .. firstThreadId2) == 0 then " + + "local timeout = tonumber(redis.call('zscore', KEYS[3], firstThreadId2));" + + "if timeout <= tonumber(ARGV[4]) then " + + "redis.call('zrem', KEYS[3], firstThreadId2); " + "redis.call('lpop', KEYS[2]); " + "else " + "break;" + "end; " + "end;" - + - "if (redis.call('exists', KEYS[1]) == 0) then " + - "local nextThreadId = redis.call('lindex', KEYS[3], 0); " + + + "if (redis.call('exists', KEYS[1]) == 0) then " + + "local nextThreadId = redis.call('lindex', KEYS[2], 0); " + "if nextThreadId ~= false then " + - "redis.call('publish', KEYS[2] .. ':' .. nextThreadId, ARGV[1]); " + + "redis.call('publish', KEYS[4] .. ':' .. nextThreadId, ARGV[1]); " + "end; " + "return 1; " + "end;" + @@ -189,29 +220,27 @@ public class RedissonFairLock extends RedissonLock implements RLock { "return 0; " + "else " + "redis.call('del', KEYS[1]); " + - "local nextThreadId = redis.call('lindex', KEYS[3], 0); " + + "local nextThreadId = redis.call('lindex', KEYS[2], 0); " + "if nextThreadId ~= false then " + - "redis.call('publish', KEYS[2] .. ':' .. nextThreadId, ARGV[1]); " + + "redis.call('publish', KEYS[4] .. ':' .. 
nextThreadId, ARGV[1]); " + "end; " + "return 1; "+ "end; " + "return nil;", - Arrays.asList(getName(), getChannelName(), getThreadsQueueName()), LockPubSub.unlockMessage, internalLockLeaseTime, getLockName(Thread.currentThread().getId())); - - if (opStatus == null) { - throw new IllegalMonitorStateException("attempt to unlock lock, not locked by current thread by node id: " - + id + " thread-id: " + Thread.currentThread().getId()); - } - if (opStatus) { - cancelExpirationRenewal(); - } + Arrays.asList(getName(), getThreadsQueueName(), getTimeoutSetName(), getChannelName()), + LockPubSub.unlockMessage, internalLockLeaseTime, getLockName(threadId), System.currentTimeMillis()); } - + @Override public Condition newCondition() { throw new UnsupportedOperationException(); } + @Override + public RFuture deleteAsync() { + return commandExecutor.writeAsync(getName(), RedisCommands.DEL_OBJECTS, getName(), getThreadsQueueName(), getTimeoutSetName()); + } + @Override public RFuture forceUnlockAsync() { cancelExpirationRenewal(); @@ -222,7 +251,9 @@ public class RedissonFairLock extends RedissonLock implements RLock { + "if firstThreadId2 == false then " + "break;" + "end; " - + "if redis.call('exists', 'redisson_lock_thread:{' .. KEYS[1] .. '}:' .. firstThreadId2) == 0 then " + + "local timeout = tonumber(redis.call('zscore', KEYS[3], firstThreadId2));" + + "if timeout <= tonumber(ARGV[2]) then " + + "redis.call('zrem', KEYS[3], firstThreadId2); " + "redis.call('lpop', KEYS[2]); " + "else " + "break;" @@ -231,14 +262,15 @@ public class RedissonFairLock extends RedissonLock implements RLock { + "if (redis.call('del', KEYS[1]) == 1) then " + - "local nextThreadId = redis.call('lindex', KEYS[3], 0); " + + "local nextThreadId = redis.call('lindex', KEYS[2], 0); " + "if nextThreadId ~= false then " + - "redis.call('publish', KEYS[2] .. ':' .. nextThreadId, ARGV[1]); " + + "redis.call('publish', KEYS[4] .. ':' .. 
nextThreadId, ARGV[1]); " + "end; " + - "return 1 " + - "end " + + "return 1; " + + "end; " + "return 0;", - Arrays.asList(getName(), getChannelName(), getThreadsQueueName()), LockPubSub.unlockMessage); + Arrays.asList(getName(), getThreadsQueueName(), getTimeoutSetName(), getChannelName()), + LockPubSub.unlockMessage, System.currentTimeMillis()); } } diff --git a/redisson/src/main/java/org/redisson/RedissonKeys.java b/redisson/src/main/java/org/redisson/RedissonKeys.java index 3691be374..449cbdec4 100644 --- a/redisson/src/main/java/org/redisson/RedissonKeys.java +++ b/redisson/src/main/java/org/redisson/RedissonKeys.java @@ -17,6 +17,7 @@ package org.redisson; import java.net.InetSocketAddress; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.HashMap; @@ -32,9 +33,11 @@ import org.redisson.api.RFuture; import org.redisson.api.RKeys; import org.redisson.api.RType; import org.redisson.client.RedisException; +import org.redisson.client.codec.ScanCodec; import org.redisson.client.codec.StringCodec; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandAsyncExecutor; import org.redisson.command.CommandBatchService; import org.redisson.connection.MasterSlaveEntry; @@ -44,6 +47,11 @@ import org.redisson.misc.RPromise; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; +/** + * + * @author Nikita Koksharov + * + */ public class RedissonKeys implements RKeys { private final CommandAsyncExecutor commandExecutor; @@ -98,12 +106,12 @@ public class RedissonKeys implements RKeys { return getKeysByPattern(null); } - private ListScanResult scanIterator(InetSocketAddress client, MasterSlaveEntry entry, long startPos, String pattern, int count) { + private ListScanResult scanIterator(InetSocketAddress client, MasterSlaveEntry entry, long startPos, String pattern, int count) { if (pattern == null) { - RFuture> f = commandExecutor.readAsync(client, entry, StringCodec.INSTANCE, RedisCommands.SCAN, startPos, "COUNT", count); + RFuture> f = commandExecutor.readAsync(client, entry, new ScanCodec(StringCodec.INSTANCE), RedisCommands.SCAN, startPos, "COUNT", count); return commandExecutor.get(f); } - RFuture> f = commandExecutor.readAsync(client, entry, StringCodec.INSTANCE, RedisCommands.SCAN, startPos, "MATCH", pattern, "COUNT", count); + RFuture> f = commandExecutor.readAsync(client, entry, new ScanCodec(StringCodec.INSTANCE), RedisCommands.SCAN, startPos, "MATCH", pattern, "COUNT", count); return commandExecutor.get(f); } @@ -111,7 +119,7 @@ public class RedissonKeys implements RKeys { return new RedissonBaseIterator() { @Override - ListScanResult iterator(InetSocketAddress client, long nextIterPos) { + ListScanResult iterator(InetSocketAddress client, long nextIterPos) { return RedissonKeys.this.scanIterator(client, entry, nextIterPos, pattern, count); } @@ -123,6 +131,18 @@ public class RedissonKeys implements RKeys { }; } + @Override + public Long isExists(String... names) { + return commandExecutor.get(isExistsAsync(names)); + } + + @Override + public RFuture isExistsAsync(String... 
names) { + Object[] params = Arrays.copyOf(names, names.length, Object[].class); + return commandExecutor.readAsync((String)null, null, RedisCommands.EXISTS_LONG, params); + } + + @Override public String randomKey() { return commandExecutor.get(randomKeyAsync()); diff --git a/redisson/src/main/java/org/redisson/RedissonList.java b/redisson/src/main/java/org/redisson/RedissonList.java index 209132258..73608583b 100644 --- a/redisson/src/main/java/org/redisson/RedissonList.java +++ b/redisson/src/main/java/org/redisson/RedissonList.java @@ -34,6 +34,7 @@ import java.util.NoSuchElementException; import org.redisson.api.RFuture; import org.redisson.api.RList; +import org.redisson.api.SortOrder; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; @@ -446,7 +447,7 @@ public class RedissonList extends RedissonExpirable implements RList { } @Override - public RFuture trimAsync(long fromIndex, long toIndex) { + public RFuture trimAsync(int fromIndex, int toIndex) { return commandExecutor.writeAsync(getName(), codec, RedisCommands.LTRIM, getName(), fromIndex, toIndex); } @@ -619,13 +620,177 @@ public class RedissonList extends RedissonExpirable implements RList { } @Override - public Integer addAfter(V elementToFind, V element) { + public int addAfter(V elementToFind, V element) { return get(addAfterAsync(elementToFind, element)); } @Override - public Integer addBefore(V elementToFind, V element) { + public int addBefore(V elementToFind, V element) { return get(addBeforeAsync(elementToFind, element)); } + @Override + public List readSort(SortOrder order) { + return get(readSortAsync(order)); + } + + @Override + public RFuture> readSortAsync(SortOrder order) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_LIST, getName(), order); + } + + @Override + public List readSort(SortOrder order, int offset, int count) { + return get(readSortAsync(order, offset, count)); + } + + @Override + public RFuture> readSortAsync(SortOrder order, int offset, int count) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_LIST, getName(), "LIMIT", offset, count, order); + } + + @Override + public List readSort(String byPattern, SortOrder order) { + return get(readSortAsync(byPattern, order)); + } + + @Override + public RFuture> readSortAsync(String byPattern, SortOrder order) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_LIST, getName(), "BY", byPattern, order); + } + + @Override + public List readSort(String byPattern, SortOrder order, int offset, int count) { + return get(readSortAsync(byPattern, order, offset, count)); + } + + @Override + public RFuture> readSortAsync(String byPattern, SortOrder order, int offset, int count) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_LIST, getName(), "BY", byPattern, "LIMIT", offset, count, order); + } + + @Override + public Collection readSort(String byPattern, List getPatterns, SortOrder order) { + return (Collection)get(readSortAsync(byPattern, getPatterns, order)); + } + + @Override + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order) { + return readSortAsync(byPattern, getPatterns, order, -1, -1); + } + + @Override + public Collection readSort(String byPattern, List getPatterns, SortOrder order, int offset, int count) { + return (Collection)get(readSortAsync(byPattern, getPatterns, order, offset, count)); + } + + @Override + public RFuture> 
readSortAsync(String byPattern, List getPatterns, SortOrder order, int offset, int count) { + List params = new ArrayList(); + params.add(getName()); + if (byPattern != null) { + params.add("BY"); + params.add(byPattern); + } + if (offset != -1 && count != -1) { + params.add("LIMIT"); + } + if (offset != -1) { + params.add(offset); + } + if (count != -1) { + params.add(count); + } + for (String pattern : getPatterns) { + params.add("GET"); + params.add(pattern); + } + params.add(order); + + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_LIST, params.toArray()); + } + + @Override + public int sortTo(String destName, SortOrder order) { + return get(sortToAsync(destName, order)); + } + + @Override + public RFuture sortToAsync(String destName, SortOrder order) { + return sortToAsync(destName, null, Collections.emptyList(), order, -1, -1); + } + + @Override + public int sortTo(String destName, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, order, offset, count)); + } + + @Override + public RFuture sortToAsync(String destName, SortOrder order, int offset, int count) { + return sortToAsync(destName, null, Collections.emptyList(), order, offset, count); + } + + @Override + public int sortTo(String destName, String byPattern, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, byPattern, order, offset, count)); + } + + @Override + public int sortTo(String destName, String byPattern, SortOrder order) { + return get(sortToAsync(destName, byPattern, order)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, SortOrder order) { + return sortToAsync(destName, byPattern, Collections.emptyList(), order, -1, -1); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, SortOrder order, int offset, int count) { + return sortToAsync(destName, byPattern, Collections.emptyList(), order, offset, count); + } + + @Override + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order) { + return get(sortToAsync(destName, byPattern, getPatterns, order)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order) { + return sortToAsync(destName, byPattern, getPatterns, order, -1, -1); + } + + @Override + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, byPattern, getPatterns, order, offset, count)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order, int offset, int count) { + List params = new ArrayList(); + params.add(getName()); + if (byPattern != null) { + params.add("BY"); + params.add(byPattern); + } + if (offset != -1 && count != -1) { + params.add("LIMIT"); + } + if (offset != -1) { + params.add(offset); + } + if (count != -1) { + params.add(count); + } + for (String pattern : getPatterns) { + params.add("GET"); + params.add(pattern); + } + params.add(order); + params.add("STORE"); + params.add(destName); + + return commandExecutor.writeAsync(getName(), codec, RedisCommands.SORT_TO, params.toArray()); + } + } diff --git a/redisson/src/main/java/org/redisson/RedissonListMultimap.java b/redisson/src/main/java/org/redisson/RedissonListMultimap.java index 766377f7f..3cf59986e 100644 --- a/redisson/src/main/java/org/redisson/RedissonListMultimap.java +++ b/redisson/src/main/java/org/redisson/RedissonListMultimap.java @@ -23,11 +23,13 @@ 
import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Map.Entry; +import java.util.UUID; import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; import org.redisson.api.RList; import org.redisson.api.RListMultimap; +import org.redisson.api.RedissonClient; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.RedisStrictCommand; @@ -44,12 +46,12 @@ public class RedissonListMultimap extends RedissonMultimap implement private static final RedisStrictCommand LLEN_VALUE = new RedisStrictCommand("LLEN", new BooleanAmountReplayConvertor()); - RedissonListMultimap(CommandAsyncExecutor connectionManager, String name) { - super(connectionManager, name); + RedissonListMultimap(UUID id, CommandAsyncExecutor connectionManager, String name) { + super(id, connectionManager, name); } - RedissonListMultimap(Codec codec, CommandAsyncExecutor connectionManager, String name) { - super(codec, connectionManager, name); + RedissonListMultimap(UUID id, Codec codec, CommandAsyncExecutor connectionManager, String name) { + super(id, codec, connectionManager, name); } @Override diff --git a/redisson/src/main/java/org/redisson/RedissonListMultimapCache.java b/redisson/src/main/java/org/redisson/RedissonListMultimapCache.java index 3f2184d66..b65ead340 100644 --- a/redisson/src/main/java/org/redisson/RedissonListMultimapCache.java +++ b/redisson/src/main/java/org/redisson/RedissonListMultimapCache.java @@ -17,6 +17,7 @@ package org.redisson; import java.util.Arrays; import java.util.Collection; +import java.util.UUID; import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; @@ -25,6 +26,7 @@ import org.redisson.api.RListMultimapCache; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommands; import org.redisson.command.CommandAsyncExecutor; +import org.redisson.eviction.EvictionScheduler; /** * @author Nikita Koksharov @@ -36,14 +38,14 @@ public class RedissonListMultimapCache extends RedissonListMultimap private final RedissonMultimapCache baseCache; - RedissonListMultimapCache(EvictionScheduler evictionScheduler, CommandAsyncExecutor connectionManager, String name) { - super(connectionManager, name); + RedissonListMultimapCache(UUID id, EvictionScheduler evictionScheduler, CommandAsyncExecutor connectionManager, String name) { + super(id, connectionManager, name); evictionScheduler.scheduleCleanMultimap(name, getTimeoutSetName()); baseCache = new RedissonMultimapCache(connectionManager, name, codec, getTimeoutSetName()); } - RedissonListMultimapCache(EvictionScheduler evictionScheduler, Codec codec, CommandAsyncExecutor connectionManager, String name) { - super(codec, connectionManager, name); + RedissonListMultimapCache(UUID id, EvictionScheduler evictionScheduler, Codec codec, CommandAsyncExecutor connectionManager, String name) { + super(id, codec, connectionManager, name); evictionScheduler.scheduleCleanMultimap(name, getTimeoutSetName()); baseCache = new RedissonMultimapCache(connectionManager, name, codec, getTimeoutSetName()); } diff --git a/redisson/src/main/java/org/redisson/RedissonListMultimapValues.java b/redisson/src/main/java/org/redisson/RedissonListMultimapValues.java index bfd9c2c30..ada36e130 100644 --- a/redisson/src/main/java/org/redisson/RedissonListMultimapValues.java +++ b/redisson/src/main/java/org/redisson/RedissonListMultimapValues.java @@ -15,11 +15,6 @@ */ package org.redisson; -import static 
org.redisson.client.protocol.RedisCommands.EVAL_OBJECT; -import static org.redisson.client.protocol.RedisCommands.LPOP; -import static org.redisson.client.protocol.RedisCommands.LPUSH_BOOLEAN; -import static org.redisson.client.protocol.RedisCommands.RPUSH_BOOLEAN; - import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; @@ -34,6 +29,7 @@ import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; import org.redisson.api.RList; +import org.redisson.api.SortOrder; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; @@ -64,6 +60,7 @@ public class RedissonListMultimapValues extends RedissonExpirable implements public static final RedisCommand EVAL_BOOLEAN_ARGS2 = new RedisCommand("EVAL", new BooleanReplayConvertor(), 5, ValueType.OBJECTS); + private final RList list; private final Object key; private final String timeoutSetName; @@ -71,6 +68,7 @@ public class RedissonListMultimapValues extends RedissonExpirable implements super(codec, commandExecutor, name); this.timeoutSetName = timeoutSetName; this.key = key; + this.list = new RedissonList(codec, commandExecutor, name); } @Override @@ -189,12 +187,12 @@ public class RedissonListMultimapValues extends RedissonExpirable implements @Override public boolean add(V e) { - return get(addAsync(e)); + return list.add(e); } @Override public RFuture addAsync(V e) { - return commandExecutor.writeAsync(getName(), codec, RPUSH_BOOLEAN, getName(), e); + return list.addAsync(e); } @Override @@ -265,62 +263,22 @@ public class RedissonListMultimapValues extends RedissonExpirable implements @Override public boolean addAll(Collection c) { - return get(addAllAsync(c)); + return list.addAll(c); } @Override public RFuture addAllAsync(final Collection c) { - if (c.isEmpty()) { - return newSucceededFuture(false); - } - - List args = new ArrayList(c.size() + 1); - args.add(getName()); - args.addAll(c); - return commandExecutor.writeAsync(getName(), codec, RPUSH_BOOLEAN, args.toArray()); + return list.addAllAsync(c); } + @Override public RFuture addAllAsync(int index, Collection coll) { - if (index < 0) { - throw new IndexOutOfBoundsException("index: " + index); - } - - if (coll.isEmpty()) { - return newSucceededFuture(false); - } - - if (index == 0) { // prepend elements to list - List elements = new ArrayList(coll); - Collections.reverse(elements); - elements.add(0, getName()); - - return commandExecutor.writeAsync(getName(), codec, LPUSH_BOOLEAN, elements.toArray()); - } - - List args = new ArrayList(coll.size() + 1); - args.add(index); - args.addAll(coll); - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_BOOLEAN_ARGS2, - "local ind = table.remove(ARGV, 1); " + // index is the first parameter - "local size = redis.call('llen', KEYS[1]); " + - "assert(tonumber(ind) <= size, 'index: ' .. ind .. ' but current size: ' .. 
size); " + - "local tail = redis.call('lrange', KEYS[1], ind, -1); " + - "redis.call('ltrim', KEYS[1], 0, ind - 1); " + - "for i=1, #ARGV, 5000 do " - + "redis.call('rpush', KEYS[1], unpack(ARGV, i, math.min(i+4999, #ARGV))); " - + "end " + - "if #tail > 0 then " + - "for i=1, #tail, 5000 do " - + "redis.call('rpush', KEYS[1], unpack(tail, i, math.min(i+4999, #tail))); " - + "end " - + "end;" + - "return 1;", - Collections.singletonList(getName()), args.toArray()); + return list.addAllAsync(index, coll); } @Override public boolean addAll(int index, Collection coll) { - return get(addAllAsync(index, coll)); + return list.addAll(index, coll); } @Override @@ -451,27 +409,22 @@ public class RedissonListMultimapValues extends RedissonExpirable implements @Override public V set(int index, V element) { - checkIndex(index); - return get(setAsync(index, element)); + return list.set(index, element); } @Override public RFuture setAsync(int index, V element) { - return commandExecutor.evalWriteAsync(getName(), codec, new RedisCommand("EVAL", 5), - "local v = redis.call('lindex', KEYS[1], ARGV[1]); " + - "redis.call('lset', KEYS[1], ARGV[1], ARGV[2]); " + - "return v", - Collections.singletonList(getName()), index, element); + return list.setAsync(index, element); } @Override public void fastSet(int index, V element) { - get(fastSetAsync(index, element)); + list.fastSet(index, element); } @Override public RFuture fastSetAsync(int index, V element) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.LSET, getName(), index, element); + return list.fastSetAsync(index, element); } @Override @@ -481,34 +434,22 @@ public class RedissonListMultimapValues extends RedissonExpirable implements @Override public V remove(int index) { - return get(removeAsync(index)); + return list.remove(index); } @Override public RFuture removeAsync(long index) { - if (index == 0) { - return commandExecutor.writeAsync(getName(), codec, LPOP, getName()); - } - - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_OBJECT, - "local v = redis.call('lindex', KEYS[1], ARGV[1]); " + - "redis.call('lset', KEYS[1], ARGV[1], 'DELETED_BY_REDISSON');" + - "redis.call('lrem', KEYS[1], 1, 'DELETED_BY_REDISSON');" + - "return v", - Collections.singletonList(getName()), index); + return list.removeAsync(index); } @Override public void fastRemove(int index) { - get(fastRemoveAsync((long)index)); + list.fastRemove(index); } @Override public RFuture fastRemoveAsync(long index) { - return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_VOID, - "redis.call('lset', KEYS[1], ARGV[1], 'DELETED_BY_REDISSON');" + - "redis.call('lrem', KEYS[1], 1, 'DELETED_BY_REDISSON');", - Collections.singletonList(getName()), index); + return list.fastRemoveAsync(index); } @Override @@ -576,12 +517,12 @@ public class RedissonListMultimapValues extends RedissonExpirable implements @Override public void trim(int fromIndex, int toIndex) { - get(trimAsync(fromIndex, toIndex)); + list.trim(fromIndex, toIndex); } @Override - public RFuture trimAsync(long fromIndex, long toIndex) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.LTRIM, getName(), fromIndex, toIndex); + public RFuture trimAsync(int fromIndex, int toIndex) { + return list.trimAsync(fromIndex, toIndex); } @Override @@ -699,6 +640,7 @@ public class RedissonListMultimapValues extends RedissonExpirable implements return new RedissonSubList(codec, commandExecutor, getName(), fromIndex, toIndex); } + @Override public String toString() { Iterator it = 
iterator(); if (! it.hasNext()) @@ -744,22 +686,135 @@ public class RedissonListMultimapValues extends RedissonExpirable implements @Override public RFuture addAfterAsync(V elementToFind, V element) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.LINSERT, getName(), "AFTER", elementToFind, element); + return list.addAfterAsync(elementToFind, element); } @Override public RFuture addBeforeAsync(V elementToFind, V element) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.LINSERT, getName(), "BEFORE", elementToFind, element); + return list.addBeforeAsync(elementToFind, element); + } + + @Override + public int addAfter(V elementToFind, V element) { + return list.addAfter(elementToFind, element); + } + + @Override + public int addBefore(V elementToFind, V element) { + return list.addBefore(elementToFind, element); + } + + @Override + public RFuture> readSortAsync(SortOrder order) { + return list.readSortAsync(order); } @Override - public Integer addAfter(V elementToFind, V element) { - return get(addAfterAsync(elementToFind, element)); + public List readSort(SortOrder order) { + return list.readSort(order); } @Override - public Integer addBefore(V elementToFind, V element) { - return get(addBeforeAsync(elementToFind, element)); + public RFuture> readSortAsync(SortOrder order, int offset, int count) { + return list.readSortAsync(order, offset, count); } + @Override + public List readSort(SortOrder order, int offset, int count) { + return list.readSort(order, offset, count); + } + + @Override + public List readSort(String byPattern, SortOrder order, int offset, int count) { + return list.readSort(byPattern, order, offset, count); + } + + @Override + public RFuture> readSortAsync(String byPattern, SortOrder order, int offset, int count) { + return list.readSortAsync(byPattern, order, offset, count); + } + + @Override + public Collection readSort(String byPattern, List getPatterns, SortOrder order, int offset, int count) { + return list.readSort(byPattern, getPatterns, order, offset, count); + } + + @Override + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order, int offset, + int count) { + return list.readSortAsync(byPattern, getPatterns, order, offset, count); + } + + @Override + public int sortTo(String destName, SortOrder order) { + return list.sortTo(destName, order); + } + + @Override + public RFuture sortToAsync(String destName, SortOrder order) { + return list.sortToAsync(destName, order); + } + + public List readSort(String byPattern, SortOrder order) { + return list.readSort(byPattern, order); + } + + public RFuture> readSortAsync(String byPattern, SortOrder order) { + return list.readSortAsync(byPattern, order); + } + + public Collection readSort(String byPattern, List getPatterns, SortOrder order) { + return list.readSort(byPattern, getPatterns, order); + } + + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order) { + return list.readSortAsync(byPattern, getPatterns, order); + } + + public int sortTo(String destName, SortOrder order, int offset, int count) { + return list.sortTo(destName, order, offset, count); + } + + public int sortTo(String destName, String byPattern, SortOrder order) { + return list.sortTo(destName, byPattern, order); + } + + public RFuture sortToAsync(String destName, SortOrder order, int offset, int count) { + return list.sortToAsync(destName, order, offset, count); + } + + public int sortTo(String destName, String byPattern, SortOrder order, int offset, int 
count) { + return list.sortTo(destName, byPattern, order, offset, count); + } + + public RFuture sortToAsync(String destName, String byPattern, SortOrder order) { + return list.sortToAsync(destName, byPattern, order); + } + + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order) { + return list.sortTo(destName, byPattern, getPatterns, order); + } + + public RFuture sortToAsync(String destName, String byPattern, SortOrder order, int offset, + int count) { + return list.sortToAsync(destName, byPattern, order, offset, count); + } + + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order, int offset, + int count) { + return list.sortTo(destName, byPattern, getPatterns, order, offset, count); + } + + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, + SortOrder order) { + return list.sortToAsync(destName, byPattern, getPatterns, order); + } + + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, + SortOrder order, int offset, int count) { + return list.sortToAsync(destName, byPattern, getPatterns, order, offset, count); + } + + + } diff --git a/redisson/src/main/java/org/redisson/RedissonLiveObjectService.java b/redisson/src/main/java/org/redisson/RedissonLiveObjectService.java index c6e129b05..92a83e3d0 100644 --- a/redisson/src/main/java/org/redisson/RedissonLiveObjectService.java +++ b/redisson/src/main/java/org/redisson/RedissonLiveObjectService.java @@ -110,23 +110,6 @@ public class RedissonLiveObjectService implements RLiveObjectService { return ClassUtils.getField(proxied, "liveObjectLiveMap"); } - @Override - public T create(Class entityClass) { - validateClass(entityClass); - try { - Class proxyClass = getProxyClass(entityClass); - Object id = generateId(entityClass); - T proxied = instantiateLiveObject(proxyClass, id); - if (!getMap(proxied).fastPut("redisson_live_object", "1")) { - throw new IllegalArgumentException("Object already exists"); - } - return proxied; - } catch (Exception ex) { - unregisterClass(entityClass); - throw ex instanceof RuntimeException ? (RuntimeException) ex : new RuntimeException(ex); - } - } - private Object generateId(Class entityClass) throws NoSuchFieldException { String idFieldName = getRIdFieldName(entityClass); RId annotation = entityClass @@ -149,18 +132,6 @@ public class RedissonLiveObjectService implements RLiveObjectService { } } - @Override - public T getOrCreate(Class entityClass, K id) { - try { - T proxied = instantiateLiveObject(getProxyClass(entityClass), id); - getMap(proxied).fastPut("redisson_live_object", "1"); - return proxied; - } catch (Exception ex) { - unregisterClass(entityClass); - throw ex instanceof RuntimeException ? 
(RuntimeException) ex : new RuntimeException(ex); - } - } - @Override public T attach(T detachedObject) { validateDetached(detachedObject); @@ -588,7 +559,7 @@ public class RedissonLiveObjectService implements RLiveObjectService { private T instantiate(Class cls, K id) throws Exception { for (Constructor constructor : cls.getDeclaredConstructors()) { - if (constructor.getParameterCount() == 0) { + if (constructor.getParameterTypes().length == 0) { constructor.setAccessible(true); return (T) constructor.newInstance(); } diff --git a/redisson/src/main/java/org/redisson/RedissonLocalCachedMap.java b/redisson/src/main/java/org/redisson/RedissonLocalCachedMap.java index b452837d4..14ae25699 100644 --- a/redisson/src/main/java/org/redisson/RedissonLocalCachedMap.java +++ b/redisson/src/main/java/org/redisson/RedissonLocalCachedMap.java @@ -15,6 +15,7 @@ */ package org.redisson; +import java.io.Serializable; import java.math.BigDecimal; import java.util.AbstractCollection; import java.util.AbstractMap; @@ -29,13 +30,13 @@ import java.util.List; import java.util.Map; import java.util.NoSuchElementException; import java.util.Set; +import java.util.UUID; import org.redisson.api.LocalCachedMapOptions; import org.redisson.api.LocalCachedMapOptions.EvictionPolicy; import org.redisson.api.RFuture; import org.redisson.api.RLocalCachedMap; import org.redisson.api.RTopic; -import org.redisson.api.RedissonClient; import org.redisson.api.listener.MessageListener; import org.redisson.client.codec.Codec; import org.redisson.client.codec.LongCodec; @@ -65,11 +66,11 @@ import io.netty.util.internal.ThreadLocalRandom; */ public class RedissonLocalCachedMap extends RedissonMap implements RLocalCachedMap { - public static class LocalCachedMapClear { + public static class LocalCachedMapClear implements Serializable { } - public static class LocalCachedMapInvalidate { + public static class LocalCachedMapInvalidate implements Serializable { private byte[] excludedId; private byte[] keyHash; @@ -93,7 +94,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R } - public static class CacheKey { + public static class CacheKey implements Serializable { private final byte[] keyHash; @@ -135,7 +136,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R } - public static class CacheValue { + public static class CacheValue implements Serializable { private final Object key; private final Object value; @@ -183,24 +184,24 @@ public class RedissonLocalCachedMap extends RedissonMap implements R private static final RedisCommand EVAL_PUT = new RedisCommand("EVAL", -1, ValueType.OBJECT, ValueType.MAP_VALUE); private static final RedisCommand EVAL_REMOVE = new RedisCommand("EVAL", -1, ValueType.OBJECT, ValueType.MAP_VALUE); - private byte[] id; + private byte[] instanceId; private RTopic invalidationTopic; private Cache cache; private int invalidateEntryOnChange; private int invalidationListenerId; - protected RedissonLocalCachedMap(RedissonClient redisson, CommandAsyncExecutor commandExecutor, String name, LocalCachedMapOptions options) { - super(commandExecutor, name); - init(redisson, name, options); + protected RedissonLocalCachedMap(UUID id, CommandAsyncExecutor commandExecutor, String name, LocalCachedMapOptions options) { + super(id, commandExecutor, name); + init(id, name, options); } - protected RedissonLocalCachedMap(RedissonClient redisson, Codec codec, CommandAsyncExecutor connectionManager, String name, LocalCachedMapOptions options) { - super(codec, connectionManager, name); - 
init(redisson, name, options); + protected RedissonLocalCachedMap(UUID id, Codec codec, CommandAsyncExecutor connectionManager, String name, LocalCachedMapOptions options) { + super(id, codec, connectionManager, name); + init(id, name, options); } - private void init(RedissonClient redisson, String name, LocalCachedMapOptions options) { - id = generateId(); + private void init(UUID id, String name, LocalCachedMapOptions options) { + instanceId = generateId(); if (options.isInvalidateEntryOnChange()) { invalidateEntryOnChange = 1; @@ -215,7 +216,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R cache = new LFUCacheMap(options.getCacheSize(), options.getTimeToLiveInMillis(), options.getMaxIdleInMillis()); } - invalidationTopic = redisson.getTopic(name + ":topic"); + invalidationTopic = new RedissonTopic(commandExecutor, suffixName(name, "topic")); if (options.isInvalidateEntryOnChange()) { invalidationListenerId = invalidationTopic.addListener(new MessageListener() { @Override @@ -225,7 +226,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R } if (msg instanceof LocalCachedMapInvalidate) { LocalCachedMapInvalidate invalidateMsg = (LocalCachedMapInvalidate)msg; - if (!Arrays.equals(invalidateMsg.getExcludedId(), id)) { + if (!Arrays.equals(invalidateMsg.getExcludedId(), instanceId)) { CacheKey key = new CacheKey(invalidateMsg.getKeyHash()); cache.remove(key); } @@ -309,7 +310,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R byte[] mapKey = encodeMapKey(key); CacheKey cacheKey = toCacheKey(mapKey); - byte[] msg = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msg = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); CacheValue cacheValue = new CacheValue(key, value); cache.put(cacheKey, cacheValue); return commandExecutor.evalWriteAsync(getName(), codec, EVAL_PUT, @@ -334,7 +335,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R byte[] encodedKey = encodeMapKey(key); byte[] encodedValue = encodeMapKey(value); CacheKey cacheKey = toCacheKey(encodedKey); - byte[] msg = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msg = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); CacheValue cacheValue = new CacheValue(key, value); cache.put(cacheKey, cacheValue); return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_BOOLEAN, @@ -364,7 +365,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R byte[] keyEncoded = encodeMapKey(key); CacheKey cacheKey = toCacheKey(keyEncoded); - byte[] msgEncoded = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msgEncoded = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); cache.remove(cacheKey); return commandExecutor.evalWriteAsync(getName(), codec, EVAL_REMOVE, "local v = redis.call('hget', KEYS[1], ARGV[1]); " @@ -391,7 +392,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R CacheKey cacheKey = toCacheKey(keyEncoded); cache.remove(cacheKey); if (invalidateEntryOnChange == 1) { - byte[] msgEncoded = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msgEncoded = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); params.add(msgEncoded); } else { params.add(null); @@ -414,24 +415,6 @@ public class RedissonLocalCachedMap extends RedissonMap implements R } - @Override - public void putAll(Map m) { - Map cacheMap = new 
HashMap(m.size()); - for (java.util.Map.Entry entry : m.entrySet()) { - CacheKey cacheKey = toCacheKey(entry.getKey()); - CacheValue cacheValue = new CacheValue(entry.getKey(), entry.getValue()); - cacheMap.put(cacheKey, cacheValue); - } - cache.putAll(cacheMap); - super.putAll(m); - - if (invalidateEntryOnChange == 1) { - for (CacheKey cacheKey : cacheMap.keySet()) { - invalidationTopic.publish(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); - } - } - } - @Override public RFuture deleteAsync() { cache.clear(); @@ -748,11 +731,12 @@ public class RedissonLocalCachedMap extends RedissonMap implements R params.add(mapKey); params.add(mapValue); CacheKey cacheKey = toCacheKey(mapKey); - byte[] msgEncoded = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msgEncoded = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); msgs.add(msgEncoded); } params.addAll(msgs); + final RPromise result = newPromise(); RFuture future = commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_VOID, "redis.call('hmset', KEYS[1], unpack(ARGV, 3, tonumber(ARGV[2]) + 2));" + "if ARGV[1] == '1' then " @@ -770,16 +754,17 @@ public class RedissonLocalCachedMap extends RedissonMap implements R } cacheMap(map); + result.trySuccess(null); } }); - return future; + return result; } @Override public RFuture addAndGetAsync(final K key, Number value) { final byte[] keyState = encodeMapKey(key); CacheKey cacheKey = toCacheKey(keyState); - byte[] msg = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msg = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); RFuture future = commandExecutor.evalWriteAsync(getName(), StringCodec.INSTANCE, new RedisCommand("EVAL", new NumberConvertor(value.getClass())), "local result = redis.call('HINCRBYFLOAT', KEYS[1], ARGV[1], ARGV[2]); " @@ -928,7 +913,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R final byte[] keyState = encodeMapKey(key); byte[] valueState = encodeMapValue(value); final CacheKey cacheKey = toCacheKey(keyState); - byte[] msg = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msg = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); RFuture future = commandExecutor.evalWriteAsync(getName(key), codec, RedisCommands.EVAL_MAP_VALUE, "if redis.call('hexists', KEYS[1], ARGV[1]) == 1 then " @@ -967,7 +952,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R byte[] oldValueState = encodeMapValue(oldValue); byte[] newValueState = encodeMapValue(newValue); final CacheKey cacheKey = toCacheKey(keyState); - byte[] msg = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msg = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); RFuture future = commandExecutor.evalWriteAsync(getName(key), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, "if redis.call('hget', KEYS[1], ARGV[1]) == ARGV[2] then " @@ -1003,7 +988,7 @@ public class RedissonLocalCachedMap extends RedissonMap implements R final byte[] keyState = encodeMapKey(key); byte[] valueState = encodeMapValue(value); final CacheKey cacheKey = toCacheKey(keyState); - byte[] msg = encode(new LocalCachedMapInvalidate(id, cacheKey.getKeyHash())); + byte[] msg = encode(new LocalCachedMapInvalidate(instanceId, cacheKey.getKeyHash())); RFuture future = commandExecutor.evalWriteAsync(getName(key), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, "if redis.call('hget', KEYS[1], ARGV[1]) == ARGV[2] then " 
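
The RedissonLocalCachedMap changes above give each map instance its own `instanceId` and stamp it into every `LocalCachedMapInvalidate` message: the writer updates its local cache and publishes an invalidation, and every listener evicts the affected key unless the message's `excludedId` matches its own id. A minimal, self-contained sketch of that near-cache invalidation pattern; apart from the identifiers quoted from the diff, the type and member names below are illustrative stand-ins, not Redisson API.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Illustrative sketch only: a local cache that publishes an invalidation message
// on every write and ignores invalidations that originated from this same instance.
class NearCacheSketch<K, V> {

    // Hypothetical message type mirroring LocalCachedMapInvalidate from the diff.
    static final class Invalidate {
        final byte[] excludedId; // id of the instance that performed the write
        final Object key;
        Invalidate(byte[] excludedId, Object key) {
            this.excludedId = excludedId;
            this.key = key;
        }
    }

    private final byte[] instanceId =
            UUID.randomUUID().toString().getBytes(StandardCharsets.UTF_8);
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Consumer<Invalidate> publisher; // stand-in for the invalidation topic

    NearCacheSketch(Consumer<Invalidate> publisher) {
        this.publisher = publisher;
    }

    // Listener side: drop messages produced by this instance, evict otherwise.
    void onMessage(Invalidate msg) {
        if (!Arrays.equals(msg.excludedId, instanceId)) {
            cache.remove(msg.key);
        }
    }

    // Write side: update the local copy, then tell every other instance to evict.
    void put(K key, V value) {
        cache.put(key, value);
        publisher.accept(new Invalidate(instanceId, key));
    }
}
```

In the actual class the publisher is the `invalidationTopic` built via `suffixName(name, "topic")`, and entries are matched by their encoded key hash (`CacheKey`) rather than by object identity.
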
diff --git a/redisson/src/main/java/org/redisson/RedissonLock.java b/redisson/src/main/java/org/redisson/RedissonLock.java index f175fea7e..2cdb38dd1 100644 --- a/redisson/src/main/java/org/redisson/RedissonLock.java +++ b/redisson/src/main/java/org/redisson/RedissonLock.java @@ -77,10 +77,7 @@ public class RedissonLock extends RedissonExpirable implements RLock { } String getChannelName() { - if (getName().contains("{")) { - return "redisson_lock__channel:" + getName(); - } - return "redisson_lock__channel__{" + getName() + "}"; + return prefixName("redisson_lock__channel", getName()); } String getLockName(long threadId) { @@ -113,19 +110,19 @@ public class RedissonLock extends RedissonExpirable implements RLock { @Override public void lockInterruptibly(long leaseTime, TimeUnit unit) throws InterruptedException { - Long ttl = tryAcquire(leaseTime, unit); + long threadId = Thread.currentThread().getId(); + Long ttl = tryAcquire(leaseTime, unit, threadId); // lock acquired if (ttl == null) { return; } - long threadId = Thread.currentThread().getId(); RFuture future = subscribe(threadId); - get(future); + commandExecutor.syncSubscription(future); try { while (true) { - ttl = tryAcquire(leaseTime, unit); + ttl = tryAcquire(leaseTime, unit, threadId); // lock acquired if (ttl == null) { break; @@ -144,8 +141,8 @@ public class RedissonLock extends RedissonExpirable implements RLock { // get(lockAsync(leaseTime, unit)); } - private Long tryAcquire(long leaseTime, TimeUnit unit) { - return get(tryAcquireAsync(leaseTime, unit, Thread.currentThread().getId())); + private Long tryAcquire(long leaseTime, TimeUnit unit, long threadId) { + return get(tryAcquireAsync(leaseTime, unit, threadId)); } private RFuture tryAcquireOnceAsync(long leaseTime, TimeUnit unit, final long threadId) { @@ -261,13 +258,21 @@ public class RedissonLock extends RedissonExpirable implements RLock { "return redis.call('pttl', KEYS[1]);", Collections.singletonList(getName()), internalLockLeaseTime, getLockName(threadId)); } + + private void acquireFailed(long threadId) { + get(acquireFailedAsync(threadId)); + } + + protected RFuture acquireFailedAsync(long threadId) { + return newSucceededFuture(null); + } @Override public boolean tryLock(long waitTime, long leaseTime, TimeUnit unit) throws InterruptedException { long time = unit.toMillis(waitTime); long current = System.currentTimeMillis(); final long threadId = Thread.currentThread().getId(); - Long ttl = tryAcquire(leaseTime, unit); + Long ttl = tryAcquire(leaseTime, unit, threadId); // lock acquired if (ttl == null) { return true; @@ -275,6 +280,7 @@ public class RedissonLock extends RedissonExpirable implements RLock { time -= (System.currentTimeMillis() - current); if (time <= 0) { + acquireFailed(threadId); return false; } @@ -291,18 +297,20 @@ public class RedissonLock extends RedissonExpirable implements RLock { } }); } + acquireFailed(threadId); return false; } try { time -= (System.currentTimeMillis() - current); if (time <= 0) { + acquireFailed(threadId); return false; } while (true) { long currentTime = System.currentTimeMillis(); - ttl = tryAcquire(leaseTime, unit); + ttl = tryAcquire(leaseTime, unit, threadId); // lock acquired if (ttl == null) { return true; @@ -310,6 +318,7 @@ public class RedissonLock extends RedissonExpirable implements RLock { time -= (System.currentTimeMillis() - currentTime); if (time <= 0) { + acquireFailed(threadId); return false; } @@ -323,6 +332,7 @@ public class RedissonLock extends RedissonExpirable implements RLock { time -= 
(System.currentTimeMillis() - currentTime); if (time <= 0) { + acquireFailed(threadId); return false; } } @@ -351,25 +361,7 @@ public class RedissonLock extends RedissonExpirable implements RLock { @Override public void unlock() { - Boolean opStatus = commandExecutor.evalWrite(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, - "if (redis.call('exists', KEYS[1]) == 0) then " + - "redis.call('publish', KEYS[2], ARGV[1]); " + - "return 1; " + - "end;" + - "if (redis.call('hexists', KEYS[1], ARGV[3]) == 0) then " + - "return nil;" + - "end; " + - "local counter = redis.call('hincrby', KEYS[1], ARGV[3], -1); " + - "if (counter > 0) then " + - "redis.call('pexpire', KEYS[1], ARGV[2]); " + - "return 0; " + - "else " + - "redis.call('del', KEYS[1]); " + - "redis.call('publish', KEYS[2], ARGV[1]); " + - "return 1; "+ - "end; " + - "return nil;", - Arrays.asList(getName(), getChannelName()), LockPubSub.unlockMessage, internalLockLeaseTime, getLockName(Thread.currentThread().getId())); + Boolean opStatus = get(unlockInnerAsync(Thread.currentThread().getId())); if (opStatus == null) { throw new IllegalMonitorStateException("attempt to unlock lock, not locked by current thread by node id: " + id + " thread-id: " + Thread.currentThread().getId()); @@ -418,6 +410,11 @@ public class RedissonLock extends RedissonExpirable implements RLock { return isExists(); } + @Override + public RFuture isExistsAsync() { + return commandExecutor.writeAsync(getName(), codec, RedisCommands.EXISTS, getName()); + } + @Override public boolean isHeldByCurrentThread() { return commandExecutor.write(getName(), LongCodec.INSTANCE, RedisCommands.HEXISTS, getName(), getLockName(Thread.currentThread().getId())); @@ -442,27 +439,32 @@ public class RedissonLock extends RedissonExpirable implements RLock { return unlockAsync(threadId); } + protected RFuture unlockInnerAsync(long threadId) { + return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, + "if (redis.call('exists', KEYS[1]) == 0) then " + + "redis.call('publish', KEYS[2], ARGV[1]); " + + "return 1; " + + "end;" + + "if (redis.call('hexists', KEYS[1], ARGV[3]) == 0) then " + + "return nil;" + + "end; " + + "local counter = redis.call('hincrby', KEYS[1], ARGV[3], -1); " + + "if (counter > 0) then " + + "redis.call('pexpire', KEYS[1], ARGV[2]); " + + "return 0; " + + "else " + + "redis.call('del', KEYS[1]); " + + "redis.call('publish', KEYS[2], ARGV[1]); " + + "return 1; "+ + "end; " + + "return nil;", + Arrays.asList(getName(), getChannelName()), LockPubSub.unlockMessage, internalLockLeaseTime, getLockName(threadId)); + + } + public RFuture unlockAsync(final long threadId) { final RPromise result = newPromise(); - RFuture future = commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, - "if (redis.call('exists', KEYS[1]) == 0) then " + - "redis.call('publish', KEYS[2], ARGV[1]); " + - "return 1; " + - "end;" + - "if (redis.call('hexists', KEYS[1], ARGV[3]) == 0) then " + - "return nil;" + - "end; " + - "local counter = redis.call('hincrby', KEYS[1], ARGV[3], -1); " + - "if (counter > 0) then " + - "redis.call('pexpire', KEYS[1], ARGV[2]); " + - "return 0; " + - "else " + - "redis.call('del', KEYS[1]); " + - "redis.call('publish', KEYS[2], ARGV[1]); " + - "return 1; "+ - "end; " + - "return nil;", - Arrays.asList(getName(), getChannelName()), LockPubSub.unlockMessage, internalLockLeaseTime, getLockName(threadId)); + RFuture future = unlockInnerAsync(threadId); future.addListener(new 
FutureListener() { @Override @@ -647,7 +649,7 @@ public class RedissonLock extends RedissonExpirable implements RLock { time.addAndGet(-elapsed); if (time.get() <= 0) { - result.trySuccess(false); + trySuccessFalse(currentThreadId, result); return; } @@ -669,12 +671,6 @@ public class RedissonLock extends RedissonExpirable implements RLock { long elapsed = System.currentTimeMillis() - current; time.addAndGet(-elapsed); - if (time.get() <= 0) { - unsubscribe(subscribeFuture, currentThreadId); - result.trySuccess(false); - return; - } - tryLockAsync(time, leaseTime, unit, subscribeFuture, result, currentThreadId); } }); @@ -684,19 +680,33 @@ public class RedissonLock extends RedissonExpirable implements RLock { public void run(Timeout timeout) throws Exception { if (!subscribeFuture.isDone()) { subscribeFuture.cancel(false); - result.trySuccess(false); + trySuccessFalse(currentThreadId, result); } } }, time.get(), TimeUnit.MILLISECONDS); futureRef.set(scheduledFuture); } } + }); return result; } + private void trySuccessFalse(final long currentThreadId, final RPromise result) { + acquireFailedAsync(currentThreadId).addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (future.isSuccess()) { + result.trySuccess(false); + } else { + result.tryFailure(future.cause()); + } + } + }); + } + private void tryLockAsync(final AtomicLong time, final long leaseTime, final TimeUnit unit, final RFuture subscribeFuture, final RPromise result, final long currentThreadId) { if (result.isDone()) { @@ -706,7 +716,7 @@ public class RedissonLock extends RedissonExpirable implements RLock { if (time.get() <= 0) { unsubscribe(subscribeFuture, currentThreadId); - result.trySuccess(false); + trySuccessFalse(currentThreadId, result); return; } @@ -736,7 +746,7 @@ public class RedissonLock extends RedissonExpirable implements RLock { if (time.get() <= 0) { unsubscribe(subscribeFuture, currentThreadId); - result.trySuccess(false); + trySuccessFalse(currentThreadId, result); return; } @@ -793,3 +803,4 @@ public class RedissonLock extends RedissonExpirable implements RLock { } +; \ No newline at end of file diff --git a/redisson/src/main/java/org/redisson/RedissonMap.java b/redisson/src/main/java/org/redisson/RedissonMap.java index ee74b50e6..14db82119 100644 --- a/redisson/src/main/java/org/redisson/RedissonMap.java +++ b/redisson/src/main/java/org/redisson/RedissonMap.java @@ -28,11 +28,13 @@ import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; +import java.util.UUID; import org.redisson.api.RFuture; +import org.redisson.api.RLock; import org.redisson.api.RMap; import org.redisson.client.codec.Codec; -import org.redisson.client.codec.ScanCodec; +import org.redisson.client.codec.MapScanCodec; import org.redisson.client.codec.StringCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; @@ -42,7 +44,9 @@ import org.redisson.client.protocol.convertor.NumberConvertor; import org.redisson.client.protocol.decoder.MapScanResult; import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandAsyncExecutor; +import org.redisson.command.CommandExecutor; import org.redisson.connection.decoder.MapGetAllDecoder; +import org.redisson.misc.Hash; /** * Distributed and concurrent implementation of {@link java.util.concurrent.ConcurrentMap} @@ -61,14 +65,33 @@ public class RedissonMap extends RedissonExpirable implements RMap { static final 
RedisCommand EVAL_REMOVE_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 4, ValueType.MAP); static final RedisCommand EVAL_PUT = EVAL_REPLACE; - protected RedissonMap(CommandAsyncExecutor commandExecutor, String name) { + private final UUID id; + + protected RedissonMap(UUID id, CommandAsyncExecutor commandExecutor, String name) { super(commandExecutor, name); + this.id = id; } - public RedissonMap(Codec codec, CommandAsyncExecutor commandExecutor, String name) { + public RedissonMap(UUID id, Codec codec, CommandAsyncExecutor commandExecutor, String name) { super(codec, commandExecutor, name); + this.id = id; } + @Override + public RLock getLock(K key) { + String lockName = getLockName(key); + return new RedissonLock((CommandExecutor)commandExecutor, lockName, id); + } + + private String getLockName(Object key) { + try { + byte[] keyState = codec.getMapKeyEncoder().encode(key); + return "{" + getName() + "}:" + Hash.hashToBase64(keyState) + ":key"; + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + @Override public int size() { return get(sizeAsync()); @@ -86,6 +109,10 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture valueSizeAsync(K key) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + return commandExecutor.readAsync(getName(), codec, RedisCommands.HSTRLEN, getName(key), key); } @@ -101,6 +128,10 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture containsKeyAsync(Object key) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + return commandExecutor.readAsync(getName(key), codec, RedisCommands.HEXISTS, getName(key), key); } @@ -111,6 +142,10 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture containsValueAsync(Object value) { + if (value == null) { + throw new NullPointerException("map value can't be null"); + } + return commandExecutor.evalReadAsync(getName(), codec, new RedisCommand("EVAL", new BooleanReplayConvertor(), 4), "local s = redis.call('hvals', KEYS[1]);" + "for i = 1, #s, 1 do " @@ -232,6 +267,13 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture putIfAbsentAsync(K key, V value) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + if (value == null) { + throw new NullPointerException("map value can't be null"); + } + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_PUT, "if redis.call('hsetnx', KEYS[1], ARGV[1], ARGV[2]) == 1 then " + "return nil " @@ -248,6 +290,13 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture fastPutIfAbsentAsync(K key, V value) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + if (value == null) { + throw new NullPointerException("map value can't be null"); + } + return commandExecutor.writeAsync(getName(key), codec, RedisCommands.HSETNX, getName(key), key, value); } @@ -258,6 +307,13 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture removeAsync(Object key, Object value) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + if (value == null) { + throw new NullPointerException("map value can't be null"); + } + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_REMOVE_VALUE, "if redis.call('hget', KEYS[1], ARGV[1]) == ARGV[2] then " 
+ "return redis.call('hdel', KEYS[1], ARGV[1]) " @@ -274,6 +330,17 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture replaceAsync(K key, V oldValue, V newValue) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + if (oldValue == null) { + throw new NullPointerException("map oldValue can't be null"); + } + if (newValue == null) { + throw new NullPointerException("map newValue can't be null"); + } + + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_REPLACE_VALUE, "if redis.call('hget', KEYS[1], ARGV[1]) == ARGV[2] then " + "redis.call('hset', KEYS[1], ARGV[1], ARGV[3]); " @@ -291,6 +358,13 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture replaceAsync(K key, V value) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + if (value == null) { + throw new NullPointerException("map value can't be null"); + } + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_REPLACE, "if redis.call('hexists', KEYS[1], ARGV[1]) == 1 then " + "local v = redis.call('hget', KEYS[1], ARGV[1]); " @@ -304,6 +378,10 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture getAsync(K key) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + return commandExecutor.readAsync(getName(key), codec, RedisCommands.HGET, getName(key), key); } @@ -313,6 +391,13 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture putAsync(K key, V value) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + if (value == null) { + throw new NullPointerException("map value can't be null"); + } + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_PUT, "local v = redis.call('hget', KEYS[1], ARGV[1]); " + "redis.call('hset', KEYS[1], ARGV[1], ARGV[2]); " @@ -323,6 +408,10 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture removeAsync(K key) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_REMOVE, "local v = redis.call('hget', KEYS[1], ARGV[1]); " + "redis.call('hdel', KEYS[1], ARGV[1]); " @@ -332,6 +421,13 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture fastPutAsync(K key, V value) { + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + if (value == null) { + throw new NullPointerException("map value can't be null"); + } + return commandExecutor.writeAsync(getName(key), codec, RedisCommands.HSET, getName(key), key, value); } @@ -359,7 +455,7 @@ public class RedissonMap extends RedissonExpirable implements RMap { MapScanResult scanIterator(String name, InetSocketAddress client, long startPos) { RFuture> f - = commandExecutor.readAsync(client, name, new ScanCodec(codec), RedisCommands.HSCAN, name, startPos); + = commandExecutor.readAsync(client, name, new MapScanCodec(codec), RedisCommands.HSCAN, name, startPos); return get(f); } @@ -370,14 +466,17 @@ public class RedissonMap extends RedissonExpirable implements RMap { @Override public RFuture addAndGetAsync(K key, Number value) { - try { - byte[] keyState = codec.getMapKeyEncoder().encode(key); - return commandExecutor.writeAsync(getName(key), StringCodec.INSTANCE, - new RedisCommand("HINCRBYFLOAT", new 
NumberConvertor(value.getClass())), - getName(key), keyState, new BigDecimal(value.toString()).toPlainString()); - } catch (IOException e) { - throw new IllegalArgumentException(e); + if (key == null) { + throw new NullPointerException("map key can't be null"); + } + if (value == null) { + throw new NullPointerException("map value can't be null"); } + + byte[] keyState = encodeMapKey(key); + return commandExecutor.writeAsync(getName(key), StringCodec.INSTANCE, + new RedisCommand("HINCRBYFLOAT", new NumberConvertor(value.getClass())), + getName(key), keyState, new BigDecimal(value.toString()).toPlainString()); } @Override @@ -426,7 +525,7 @@ public class RedissonMap extends RedissonExpirable implements RMap { protected Iterator keyIterator() { return new RedissonMapIterator(RedissonMap.this) { @Override - K getValue(java.util.Map.Entry entry) { + protected K getValue(java.util.Map.Entry entry) { return (K) entry.getKey().getObj(); } }; @@ -464,7 +563,7 @@ public class RedissonMap extends RedissonExpirable implements RMap { protected Iterator valueIterator() { return new RedissonMapIterator(RedissonMap.this) { @Override - V getValue(java.util.Map.Entry entry) { + protected V getValue(java.util.Map.Entry entry) { return (V) entry.getValue().getObj(); } }; diff --git a/redisson/src/main/java/org/redisson/RedissonMapCache.java b/redisson/src/main/java/org/redisson/RedissonMapCache.java index 4f4c172f7..b0005bca3 100644 --- a/redisson/src/main/java/org/redisson/RedissonMapCache.java +++ b/redisson/src/main/java/org/redisson/RedissonMapCache.java @@ -23,13 +23,14 @@ import java.util.Collections; import java.util.List; import java.util.Map; import java.util.Set; +import java.util.UUID; import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; import org.redisson.api.RMapCache; import org.redisson.client.codec.Codec; import org.redisson.client.codec.LongCodec; -import org.redisson.client.codec.ScanCodec; +import org.redisson.client.codec.MapScanCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; import org.redisson.client.protocol.RedisCommands; @@ -45,6 +46,7 @@ import org.redisson.client.protocol.decoder.ObjectMapDecoder; import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandAsyncExecutor; import org.redisson.connection.decoder.MapGetAllDecoder; +import org.redisson.eviction.EvictionScheduler; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; @@ -58,7 +60,7 @@ import io.netty.util.concurrent.FutureListener; * Thus entries are checked for TTL expiration during any key/value/entry read operation. * If key/value/entry expired then it doesn't returns and clean task runs asynchronous. * Clean task deletes removes 100 expired entries at once. - * In addition there is {@link org.redisson.EvictionScheduler}. This scheduler + * In addition there is {@link org.redisson.eviction.EvictionScheduler}. This scheduler * deletes expired entries in time interval between 5 seconds to 2 hours.

 * <p/>
 * If eviction is not required then it's better to use {@link org.redisson.RedissonMap} object.
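
The javadoc above describes the `RMapCache` eviction model: expired entries are filtered out on read and removed later by the `EvictionScheduler`. A short usage sketch of that behavior, assuming the per-entry TTL/max-idle overload of `RMapCache.put`; the Redis address, map name, keys and values are placeholders.

```java
import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RMapCache;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class MapCacheTtlExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Placeholder address; note the redis:// scheme required since 2.7.1/3.2.1.
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RMapCache<String, String> sessions = redisson.getMapCache("sessions");
        // Entry expires 10 minutes after write, or after 2 minutes without reads.
        sessions.put("user:1", "token-abc", 10, TimeUnit.MINUTES, 2, TimeUnit.MINUTES);

        // Expired entries are skipped on read and cleaned up in the background.
        System.out.println(sessions.get("user:1"));

        redisson.shutdown();
    }
}
```

If no per-entry expiration is needed, a plain `RMap` avoids the extra timeout/idle bookkeeping sets (`redisson__timeout__set`, `redisson__idle__set`) used here.
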

@@ -70,32 +72,41 @@ import io.netty.util.concurrent.FutureListener; */ public class RedissonMapCache extends RedissonMap implements RMapCache { + static final RedisCommand EVAL_PUT_IF_ABSENT = new RedisCommand("EVAL", new BooleanReplayConvertor(), 7, ValueType.MAP); static final RedisCommand EVAL_HSET = new RedisCommand("EVAL", new BooleanReplayConvertor(), 4, ValueType.MAP); static final RedisCommand EVAL_REPLACE = new RedisCommand("EVAL", 6, ValueType.MAP, ValueType.MAP_VALUE); static final RedisCommand EVAL_REPLACE_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 7, Arrays.asList(ValueType.MAP_KEY, ValueType.MAP_VALUE, ValueType.MAP_VALUE)); - private static final RedisCommand EVAL_HMSET = new RedisCommand("EVAL", new VoidReplayConvertor(), 4, ValueType.MAP); + static final RedisCommand EVAL_HMSET = new RedisCommand("EVAL", new VoidReplayConvertor(), 4, ValueType.MAP); private static final RedisCommand EVAL_REMOVE = new RedisCommand("EVAL", 4, ValueType.MAP_KEY, ValueType.MAP_VALUE); private static final RedisCommand EVAL_REMOVE_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 5, ValueType.MAP); private static final RedisCommand EVAL_PUT_TTL = new RedisCommand("EVAL", 9, ValueType.MAP, ValueType.MAP_VALUE); private static final RedisCommand EVAL_FAST_PUT_TTL = new RedisCommand("EVAL", new BooleanReplayConvertor(), 9, ValueType.MAP, ValueType.MAP_VALUE); private static final RedisCommand EVAL_GET_TTL = new RedisCommand("EVAL", 7, ValueType.MAP_KEY, ValueType.MAP_VALUE); private static final RedisCommand EVAL_CONTAINS_KEY = new RedisCommand("EVAL", new BooleanReplayConvertor(), 7, ValueType.MAP_KEY); - private static final RedisCommand EVAL_CONTAINS_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 7, ValueType.MAP_VALUE); - private static final RedisCommand EVAL_FAST_REMOVE = new RedisCommand("EVAL", 5, ValueType.MAP_KEY); + static final RedisCommand EVAL_CONTAINS_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 7, ValueType.MAP_VALUE); + static final RedisCommand EVAL_FAST_REMOVE = new RedisCommand("EVAL", 5, ValueType.MAP_KEY); - protected RedissonMapCache(EvictionScheduler evictionScheduler, CommandAsyncExecutor commandExecutor, String name) { - super(commandExecutor, name); + RedissonMapCache(UUID id, CommandAsyncExecutor commandExecutor, String name) { + super(id, commandExecutor, name); + } + + RedissonMapCache(UUID id, Codec codec, CommandAsyncExecutor commandExecutor, String name) { + super(id, codec, commandExecutor, name); + } + + public RedissonMapCache(UUID id, EvictionScheduler evictionScheduler, CommandAsyncExecutor commandExecutor, String name) { + super(id, commandExecutor, name); evictionScheduler.schedule(getName(), getTimeoutSetName(), getIdleSetName()); } - public RedissonMapCache(Codec codec, EvictionScheduler evictionScheduler, CommandAsyncExecutor commandExecutor, String name) { - super(codec, commandExecutor, name); + public RedissonMapCache(UUID id, Codec codec, EvictionScheduler evictionScheduler, CommandAsyncExecutor commandExecutor, String name) { + super(id, codec, commandExecutor, name); evictionScheduler.schedule(getName(), getTimeoutSetName(), getIdleSetName()); } @Override public RFuture containsKeyAsync(Object key) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_CONTAINS_KEY, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_CONTAINS_KEY, "local value = redis.call('hget', KEYS[1], ARGV[2]); " + "local expireDate = 92233720368547758; " + "if value ~= false then " + @@ 
-121,7 +132,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "return 1;" + "end;" + "return 0; ", - Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), System.currentTimeMillis(), key); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), System.currentTimeMillis(), key); } @Override @@ -256,7 +267,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac maxIdleTimeout = System.currentTimeMillis() + maxIdleDelta; } - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_PUT_TTL, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_PUT_TTL, "if redis.call('hexists', KEYS[1], ARGV[4]) == 0 then " + "if tonumber(ARGV[1]) > 0 then " + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[4]); " @@ -275,12 +286,12 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "local t, val = struct.unpack('dLc0', value); " + "return val; " + "end", - Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), ttlTimeout, maxIdleTimeout, maxIdleDelta, key, value); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), ttlTimeout, maxIdleTimeout, maxIdleDelta, key, value); } @Override public RFuture removeAsync(Object key, Object value) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_REMOVE_VALUE, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_REMOVE_VALUE, "local value = redis.call('hget', KEYS[1], ARGV[1]); " + "if value == false then " + "return 0; " @@ -293,12 +304,12 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "else " + "return 0 " + "end", - Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), key, value); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), key, value); } @Override public RFuture getAsync(K key) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_GET_TTL, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_GET_TTL, "local value = redis.call('hget', KEYS[1], ARGV[2]); " + "if value == false then " + "return nil; " @@ -324,7 +335,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "return nil; " + "end; " + "return val; ", - Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), System.currentTimeMillis(), key); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), System.currentTimeMillis(), key); } @Override @@ -334,7 +345,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac @Override public RFuture putAsync(K key, V value) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_PUT, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_PUT, "local v = redis.call('hget', KEYS[1], ARGV[1]); " + "local value = struct.pack('dLc0', 0, string.len(ARGV[2]), ARGV[2]); " + "redis.call('hset', KEYS[1], ARGV[1], value); " @@ -343,12 +354,12 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "end; " + "local t, val = struct.unpack('dLc0', v); " + "return val; ", - Collections.singletonList(getName()), key, value); + Collections.singletonList(getName(key)), key, value); } @Override public RFuture putIfAbsentAsync(K key, V value) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_PUT, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_PUT, "local value = struct.pack('dLc0', 0, string.len(ARGV[2]), ARGV[2]); " + "if redis.call('hsetnx', KEYS[1], 
ARGV[1], value) == 1 then " + "return nil " @@ -360,7 +371,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "local t, val = struct.unpack('dLc0', v); " + "return val; " + "end", - Collections.singletonList(getName()), key, value); + Collections.singletonList(getName(key)), key, value); } @Override @@ -410,7 +421,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac maxIdleTimeout = System.currentTimeMillis() + maxIdleDelta; } - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_FAST_PUT_TTL, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_FAST_PUT_TTL, "if tonumber(ARGV[1]) > 0 then " + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[4]); " + "else " @@ -423,7 +434,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "end; " + "local value = struct.pack('dLc0', ARGV[3], string.len(ARGV[5]), ARGV[5]); " + "return redis.call('hset', KEYS[1], ARGV[4], value); ", - Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), ttlTimeout, maxIdleTimeout, maxIdleDelta, key, value); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), ttlTimeout, maxIdleTimeout, maxIdleDelta, key, value); } @Override @@ -468,7 +479,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac maxIdleTimeout = System.currentTimeMillis() + maxIdleDelta; } - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_PUT_TTL, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_PUT_TTL, "local v = redis.call('hget', KEYS[1], ARGV[4]); " + "if tonumber(ARGV[1]) > 0 then " + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[4]); " @@ -487,20 +498,37 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "end; " + "local t, val = struct.unpack('dLc0', v); " + "return val", - Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), ttlTimeout, maxIdleTimeout, maxIdleDelta, key, value); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), ttlTimeout, maxIdleTimeout, maxIdleDelta, key, value); } + String getTimeoutSetNameByKey(Object key) { + return prefixName("redisson__timeout__set", getName(key)); + } + + String getTimeoutSetName(String name) { + return prefixName("redisson__timeout__set", name); + } + String getTimeoutSetName() { - return "redisson__timeout__set__{" + getName() + "}"; + return prefixName("redisson__timeout__set", getName()); } + String getIdleSetNameByKey(Object key) { + return prefixName("redisson__idle__set", getName(key)); + } + + String getIdleSetName(String name) { + return prefixName("redisson__idle__set", name); + } + String getIdleSetName() { - return "redisson__idle__set__{" + getName() + "}"; + return prefixName("redisson__idle__set", getName()); } + @Override public RFuture removeAsync(K key) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_REMOVE, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_REMOVE, "local v = redis.call('hget', KEYS[1], ARGV[1]); " + "redis.call('zrem', KEYS[2], ARGV[1]); " + "redis.call('zrem', KEYS[3], ARGV[1]); " @@ -510,7 +538,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "return val; " + "end; " + "return v", - Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), key); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), key); } @Override @@ -528,9 +556,13 @@ public class RedissonMapCache extends RedissonMap implements RMapCac @Override MapScanResult 
scanIterator(String name, InetSocketAddress client, long startPos) { + return get(scanIteratorAsync(name, client, startPos)); + } + + public RFuture> scanIteratorAsync(final String name, InetSocketAddress client, long startPos) { RedisCommand> EVAL_HSCAN = new RedisCommand>("EVAL", - new ListMultiDecoder(new LongMultiDecoder(), new ObjectMapDecoder(new ScanCodec(codec)), new ObjectListDecoder(codec), new MapCacheScanResultReplayDecoder()), ValueType.MAP); - RFuture> f = commandExecutor.evalReadAsync(client, getName(), codec, EVAL_HSCAN, + new ListMultiDecoder(new LongMultiDecoder(), new ObjectMapDecoder(new MapScanCodec(codec)), new ObjectListDecoder(codec), new MapCacheScanResultReplayDecoder()), ValueType.MAP); + RFuture> f = commandExecutor.evalReadAsync(client, name, codec, EVAL_HSCAN, "local result = {}; " + "local idleKeys = {}; " + "local res = redis.call('hscan', KEYS[1], ARGV[2]); " @@ -561,7 +593,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "end; " + "end; " + "end;" - + "return {res[1], result, idleKeys};", Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), System.currentTimeMillis(), startPos); + + "return {res[1], result, idleKeys};", Arrays.asList(name, getTimeoutSetName(name), getIdleSetName(name)), System.currentTimeMillis(), startPos); f.addListener(new FutureListener>() { @Override @@ -577,7 +609,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac args.add(System.currentTimeMillis()); args.addAll(res.getIdleKeys()); - commandExecutor.evalWriteAsync(getName(), codec, new RedisCommand>("EVAL", new MapGetAllDecoder(args, 1), 7, ValueType.MAP_KEY, ValueType.MAP_VALUE), + commandExecutor.evalWriteAsync(name, codec, new RedisCommand>("EVAL", new MapGetAllDecoder(args, 1), 7, ValueType.MAP_KEY, ValueType.MAP_VALUE), "local currentTime = tonumber(table.remove(ARGV, 1)); " // index is the first parameter + "local map = redis.call('hmget', KEYS[1], unpack(ARGV)); " + "for i = #map, 1, -1 do " @@ -598,34 +630,65 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "end; " + "end; " + "end; ", - Arrays.asList(getName(), getIdleSetName()), args.toArray()); + Arrays.asList(name, getIdleSetName(name)), args.toArray()); } } }); - return get(f); + return (RFuture>)(Object)f; } + @Override public RFuture fastPutAsync(K key, V value) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_HSET, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_HSET, "local val = struct.pack('dLc0', 0, string.len(ARGV[2]), ARGV[2]); " + "return redis.call('hset', KEYS[1], ARGV[1], val); ", - Collections.singletonList(getName()), key, value); + Collections.singletonList(getName(key)), key, value); } @Override public RFuture fastPutIfAbsentAsync(K key, V value) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_HSET, - "local val = struct.pack('dLc0', 0, string.len(ARGV[2]), ARGV[2]); " - + "return redis.call('hsetnx', KEYS[1], ARGV[1], val); ", - Collections.singletonList(getName()), key, value); + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_PUT_IF_ABSENT, + "local value = redis.call('hget', KEYS[1], ARGV[2]); " + + "if value == false then " + + "local val = struct.pack('dLc0', 0, string.len(ARGV[3]), ARGV[3]); " + + "redis.call('hset', KEYS[1], ARGV[2], val); " + + "return 1; " + + "end; " + + "local t, val = struct.unpack('dLc0', value); " + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[2]); " + + "if 
expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore) " + + "end; " + + "if t ~= 0 then " + + "local expireIdle = redis.call('zscore', KEYS[3], ARGV[2]); " + + "if expireIdle ~= false then " + + "if tonumber(expireIdle) > tonumber(ARGV[1]) then " + + "local value = struct.pack('dLc0', t, string.len(val), val); " + + "redis.call('hset', KEYS[1], ARGV[2], value); " + + "redis.call('zadd', KEYS[3], t + tonumber(ARGV[1]), ARGV[2]); " + + "end; " + + "expireDate = math.min(expireDate, tonumber(expireIdle)) " + + "end; " + + "end; " + + "if expireDate > tonumber(ARGV[1]) then " + + "return 0; " + + "end; " + + + "redis.call('zrem', KEYS[2], ARGV[2]); " + + "redis.call('zrem', KEYS[3], ARGV[2]); " + + "local val = struct.pack('dLc0', 0, string.len(ARGV[3]), ARGV[3]); " + + "redis.call('hset', KEYS[1], ARGV[2], val); " + + "return 1; ", + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), System.currentTimeMillis(), key, value); } @Override public RFuture replaceAsync(K key, V oldValue, V newValue) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_REPLACE_VALUE, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_REPLACE_VALUE, "local v = redis.call('hget', KEYS[1], ARGV[2]); " + "if v == false then " + "return 0;" @@ -654,12 +717,12 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "return 1; " + "end; " + "return 0; ", - Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), System.currentTimeMillis(), key, oldValue, newValue); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key), getIdleSetNameByKey(key)), System.currentTimeMillis(), key, oldValue, newValue); } @Override public RFuture replaceAsync(K key, V value) { - return commandExecutor.evalWriteAsync(getName(), codec, EVAL_REPLACE, + return commandExecutor.evalWriteAsync(getName(key), codec, EVAL_REPLACE, "local v = redis.call('hget', KEYS[1], ARGV[2]); " + "if v ~= false then " + "local t, val = struct.unpack('dLc0', v); " @@ -672,7 +735,7 @@ public class RedissonMapCache extends RedissonMap implements RMapCac + "else " + "return nil; " + "end", - Arrays.asList(getName(), getTimeoutSetName()), System.currentTimeMillis(), key, value); + Arrays.asList(getName(key), getTimeoutSetNameByKey(key)), System.currentTimeMillis(), key, value); } @Override @@ -736,6 +799,40 @@ public class RedissonMapCache extends RedissonMap implements RMapCac Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName())); } + @Override + public RFuture> readAllKeySetAsync() { + return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_MAP_KEY_SET, + "local s = redis.call('hgetall', KEYS[1]); " + + "local result = {}; " + + "for i, v in ipairs(s) do " + + "if i % 2 == 0 then " + + "local t, val = struct.unpack('dLc0', v); " + + "local key = s[i-1];" + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], key); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore) " + + "end; " + + "if t ~= 0 then " + + "local expireIdle = redis.call('zscore', KEYS[3], key); " + + "if expireIdle ~= false then " + + "if tonumber(expireIdle) > tonumber(ARGV[1]) then " + + "local value = struct.pack('dLc0', t, string.len(val), val); " + + "redis.call('hset', KEYS[1], key, value); " + + "redis.call('zadd', KEYS[3], t + tonumber(ARGV[1]), key); " + + "end; " + + "expireDate = math.min(expireDate, tonumber(expireIdle)) " + + "end; " + + "end; " + + "if expireDate > 
tonumber(ARGV[1]) then " + + "table.insert(result, key); " + + "end; " + + "end; " + + "end;" + + "return result;", + Arrays.asList(getName(), getTimeoutSetName(), getIdleSetName()), System.currentTimeMillis()); + } + @Override public RFuture>> readAllEntrySetAsync() { return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_MAP_ENTRY, diff --git a/redisson/src/main/java/org/redisson/RedissonMultimap.java b/redisson/src/main/java/org/redisson/RedissonMultimap.java index eae974e07..ace7843a1 100644 --- a/redisson/src/main/java/org/redisson/RedissonMultimap.java +++ b/redisson/src/main/java/org/redisson/RedissonMultimap.java @@ -15,6 +15,7 @@ */ package org.redisson; +import java.io.IOException; import java.net.InetSocketAddress; import java.util.AbstractCollection; import java.util.AbstractSet; @@ -26,19 +27,22 @@ import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Set; +import java.util.UUID; import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; +import org.redisson.api.RLock; import org.redisson.api.RMultimap; import org.redisson.client.codec.Codec; import org.redisson.client.codec.LongCodec; -import org.redisson.client.codec.ScanCodec; +import org.redisson.client.codec.MapScanCodec; import org.redisson.client.codec.StringCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.decoder.MapScanResult; import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandAsyncExecutor; +import org.redisson.command.CommandExecutor; import org.redisson.misc.Hash; /** @@ -49,14 +53,33 @@ import org.redisson.misc.Hash; */ public abstract class RedissonMultimap extends RedissonExpirable implements RMultimap { - RedissonMultimap(CommandAsyncExecutor connectionManager, String name) { + private final UUID id; + + RedissonMultimap(UUID id, CommandAsyncExecutor connectionManager, String name) { super(connectionManager, name); + this.id = id; } - RedissonMultimap(Codec codec, CommandAsyncExecutor connectionManager, String name) { + RedissonMultimap(UUID id, Codec codec, CommandAsyncExecutor connectionManager, String name) { super(codec, connectionManager, name); + this.id = id; } + @Override + public RLock getLock(K key) { + String lockName = getLockName(key); + return new RedissonLock((CommandExecutor)commandExecutor, lockName, id); + } + + private String getLockName(Object key) { + try { + byte[] keyState = codec.getMapKeyEncoder().encode(key); + return "{" + getName() + "}:" + Hash.hashToBase64(keyState) + ":key"; + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + protected String hash(byte[] objectState) { return Hash.hashToBase64(objectState); } @@ -249,7 +272,7 @@ public abstract class RedissonMultimap extends RedissonExpirable implement MapScanResult scanIterator(InetSocketAddress client, long startPos) { - RFuture> f = commandExecutor.readAsync(client, getName(), new ScanCodec(codec, StringCodec.INSTANCE), RedisCommands.HSCAN, getName(), startPos); + RFuture> f = commandExecutor.readAsync(client, getName(), new MapScanCodec(codec, StringCodec.INSTANCE), RedisCommands.HSCAN, getName(), startPos); return get(f); } @@ -263,7 +286,7 @@ public abstract class RedissonMultimap extends RedissonExpirable implement public Iterator iterator() { return new RedissonMultiMapKeysIterator(RedissonMultimap.this) { @Override - K getValue(java.util.Map.Entry entry) { + protected K 
getValue(java.util.Map.Entry entry) { return (K) entry.getKey().getObj(); } }; diff --git a/redisson/src/main/java/org/redisson/RedissonNode.java b/redisson/src/main/java/org/redisson/RedissonNode.java index cb941788b..52e30a141 100644 --- a/redisson/src/main/java/org/redisson/RedissonNode.java +++ b/redisson/src/main/java/org/redisson/RedissonNode.java @@ -23,6 +23,7 @@ import java.util.Map.Entry; import org.redisson.api.RFuture; import org.redisson.api.RedissonClient; import org.redisson.client.RedisConnection; +import org.redisson.client.protocol.RedisCommands; import org.redisson.config.RedissonNodeConfig; import org.redisson.connection.ConnectionManager; import org.redisson.connection.MasterSlaveEntry; @@ -146,7 +147,7 @@ public class RedissonNode { private void retrieveAdresses() { ConnectionManager connectionManager = ((Redisson)redisson).getConnectionManager(); for (MasterSlaveEntry entry : connectionManager.getEntrySet()) { - RFuture readFuture = entry.connectionReadOp(); + RFuture readFuture = entry.connectionReadOp(null); if (readFuture.awaitUninterruptibly((long)connectionManager.getConfig().getConnectTimeout()) && readFuture.isSuccess()) { RedisConnection connection = readFuture.getNow(); @@ -155,7 +156,7 @@ public class RedissonNode { localAddress = (InetSocketAddress) connection.getChannel().localAddress(); return; } - RFuture writeFuture = entry.connectionWriteOp(); + RFuture writeFuture = entry.connectionWriteOp(null); if (writeFuture.awaitUninterruptibly((long)connectionManager.getConfig().getConnectTimeout()) && writeFuture.isSuccess()) { RedisConnection connection = writeFuture.getNow(); diff --git a/redisson/src/main/java/org/redisson/RedissonObject.java b/redisson/src/main/java/org/redisson/RedissonObject.java index ce077a0d0..79915fedf 100644 --- a/redisson/src/main/java/org/redisson/RedissonObject.java +++ b/redisson/src/main/java/org/redisson/RedissonObject.java @@ -31,11 +31,11 @@ import org.redisson.misc.RPromise; * @author Nikita Koksharov * */ -abstract class RedissonObject implements RObject { +public abstract class RedissonObject implements RObject { - final CommandAsyncExecutor commandExecutor; + protected final CommandAsyncExecutor commandExecutor; private final String name; - final Codec codec; + protected final Codec codec; public RedissonObject(Codec codec, CommandAsyncExecutor commandExecutor, String name) { this.codec = codec; @@ -51,6 +51,20 @@ abstract class RedissonObject implements RObject { return commandExecutor.await(future, timeout, timeoutUnit); } + protected String prefixName(String prefix, String name) { + if (name.contains("{")) { + return prefix + ":" + name; + } + return prefix + ":{" + name + "}"; + } + + protected String suffixName(String name, String suffix) { + if (name.contains("{")) { + return name + ":" + suffix; + } + return "{" + name + "}:" + suffix; + } + protected V get(RFuture future) { return commandExecutor.get(future); } @@ -67,6 +81,10 @@ abstract class RedissonObject implements RObject { public String getName() { return name; } + + protected String getName(Object o) { + return getName(); + } @Override public void rename(String newName) { diff --git a/redisson/src/main/java/org/redisson/RedissonPatternTopic.java b/redisson/src/main/java/org/redisson/RedissonPatternTopic.java index b99b8438d..cc965006b 100644 --- a/redisson/src/main/java/org/redisson/RedissonPatternTopic.java +++ b/redisson/src/main/java/org/redisson/RedissonPatternTopic.java @@ -64,7 +64,7 @@ public class RedissonPatternTopic implements RPatternTopic { 
private int addListener(RedisPubSubListener pubSubListener) { RFuture future = commandExecutor.getConnectionManager().psubscribe(name, codec, pubSubListener); - future.syncUninterruptibly(); + commandExecutor.syncSubscription(future); return System.identityHashCode(pubSubListener); } @@ -86,7 +86,46 @@ public class RedissonPatternTopic implements RPatternTopic { semaphore.release(); } } + + @Override + public void removeAllListeners() { + AsyncSemaphore semaphore = commandExecutor.getConnectionManager().getSemaphore(name); + semaphore.acquireUninterruptibly(); + + PubSubConnectionEntry entry = commandExecutor.getConnectionManager().getPubSubEntry(name); + if (entry == null) { + semaphore.release(); + return; + } + entry.removeAllListeners(name); + if (!entry.hasListeners(name)) { + commandExecutor.getConnectionManager().punsubscribe(name, semaphore); + } else { + semaphore.release(); + } + } + + @Override + public void removeListener(PatternMessageListener listener) { + AsyncSemaphore semaphore = commandExecutor.getConnectionManager().getSemaphore(name); + semaphore.acquireUninterruptibly(); + + PubSubConnectionEntry entry = commandExecutor.getConnectionManager().getPubSubEntry(name); + if (entry == null) { + semaphore.release(); + return; + } + + entry.removeListener(name, listener); + if (!entry.hasListeners(name)) { + commandExecutor.getConnectionManager().punsubscribe(name, semaphore); + } else { + semaphore.release(); + } + + } + @Override public List getPatternNames() { return Collections.singletonList(name); diff --git a/redisson/src/main/java/org/redisson/RedissonPermitExpirableSemaphore.java b/redisson/src/main/java/org/redisson/RedissonPermitExpirableSemaphore.java index 384366440..5038c16cd 100644 --- a/redisson/src/main/java/org/redisson/RedissonPermitExpirableSemaphore.java +++ b/redisson/src/main/java/org/redisson/RedissonPermitExpirableSemaphore.java @@ -91,7 +91,7 @@ public class RedissonPermitExpirableSemaphore extends RedissonExpirable implemen } RFuture future = subscribe(); - get(future); + commandExecutor.syncSubscription(future); try { while (true) { final Long nearestTimeout; @@ -672,7 +672,8 @@ public class RedissonPermitExpirableSemaphore extends RedissonExpirable implemen "end;" + "return value; " + "end; " + - "return redis.call('get', KEYS[1]); ", + "local ret = redis.call('get', KEYS[1]); " + + "return ret == false and 0 or ret;", Arrays.asList(getName(), timeoutName, getChannelName()), System.currentTimeMillis()); } diff --git a/redisson/src/main/java/org/redisson/RedissonPriorityDeque.java b/redisson/src/main/java/org/redisson/RedissonPriorityDeque.java new file mode 100644 index 000000000..155b5ef10 --- /dev/null +++ b/redisson/src/main/java/org/redisson/RedissonPriorityDeque.java @@ -0,0 +1,243 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
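`RedissonPatternTopic` above gains `removeAllListeners()` and `removeListener(PatternMessageListener)`, so a subscription can be detached by listener instance rather than by id. A small usage sketch, not part of this patch, assuming default connection settings, a Java 8 lambda, and the listener type from the 3.x `org.redisson.api.listener` package; the channel pattern is illustrative:

```java
import org.redisson.Redisson;
import org.redisson.api.RPatternTopic;
import org.redisson.api.RedissonClient;
import org.redisson.api.listener.PatternMessageListener;

public class PatternTopicListenerExample {
    public static void main(String[] args) {
        RedissonClient redisson = Redisson.create();
        RPatternTopic<String> topic = redisson.getPatternTopic("news.*");

        PatternMessageListener<String> listener =
                (pattern, channel, msg) -> System.out.println(channel + ": " + msg);
        topic.addListener(listener);

        // ... later: detach only this listener, other subscribers stay registered
        topic.removeListener(listener);

        redisson.shutdown();
    }
}
```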
+ */ +package org.redisson; + +import java.util.Iterator; +import java.util.NoSuchElementException; + +import org.redisson.api.RFuture; +import org.redisson.api.RPriorityDeque; +import org.redisson.client.codec.Codec; +import org.redisson.client.protocol.RedisCommand; +import org.redisson.client.protocol.RedisCommand.ValueType; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.client.protocol.convertor.VoidReplayConvertor; +import org.redisson.client.protocol.decoder.ListFirstObjectDecoder; +import org.redisson.command.CommandExecutor; + +/** + * Distributed and concurrent implementation of {@link java.util.Queue} + * + * @author Nikita Koksharov + * + * @param the type of elements held in this collection + */ +public class RedissonPriorityDeque extends RedissonPriorityQueue implements RPriorityDeque { + + private static final RedisCommand RPUSH_VOID = new RedisCommand("RPUSH", new VoidReplayConvertor(), 2, ValueType.OBJECTS); + private static final RedisCommand LRANGE_SINGLE = new RedisCommand("LRANGE", new ListFirstObjectDecoder()); + + + protected RedissonPriorityDeque(CommandExecutor commandExecutor, String name, Redisson redisson) { + super(commandExecutor, name, redisson); + } + + public RedissonPriorityDeque(Codec codec, CommandExecutor commandExecutor, String name, Redisson redisson) { + super(codec, commandExecutor, name, redisson); + } + + @Override + public void addFirst(V e) { + get(addFirstAsync(e)); + } + +// @Override + public RFuture addFirstAsync(V e) { + return commandExecutor.writeAsync(getName(), codec, RedisCommands.LPUSH_VOID, getName(), e); + } + + @Override + public void addLast(V e) { + get(addLastAsync(e)); + } + +// @Override + public RFuture addLastAsync(V e) { + return commandExecutor.writeAsync(getName(), codec, RPUSH_VOID, getName(), e); + } + + + @Override + public Iterator descendingIterator() { + return new Iterator() { + + private int currentIndex = size(); + private boolean removeExecuted; + + @Override + public boolean hasNext() { + int size = size(); + return currentIndex > 0 && size > 0; + } + + @Override + public V next() { + if (!hasNext()) { + throw new NoSuchElementException("No such element at index " + currentIndex); + } + currentIndex--; + removeExecuted = false; + return RedissonPriorityDeque.this.get(currentIndex); + } + + @Override + public void remove() { + if (removeExecuted) { + throw new IllegalStateException("Element been already deleted"); + } + RedissonPriorityDeque.this.remove(currentIndex); + currentIndex++; + removeExecuted = true; + } + + }; + } + +// @Override + public RFuture getLastAsync() { + return commandExecutor.readAsync(getName(), codec, LRANGE_SINGLE, getName(), -1, -1); + } + + @Override + public V getLast() { + V result = get(getLastAsync()); + if (result == null) { + throw new NoSuchElementException(); + } + return result; + } + + @Override + public boolean offerFirst(V e) { + return get(offerFirstAsync(e)); + } + +// @Override + public RFuture offerFirstAsync(V e) { + return commandExecutor.writeAsync(getName(), codec, RedisCommands.LPUSH_BOOLEAN, getName(), e); + } + +// @Override + public RFuture offerLastAsync(V e) { + return offerAsync(e); + } + + @Override + public boolean offerLast(V e) { + return get(offerLastAsync(e)); + } + +// @Override + public RFuture peekFirstAsync() { + return getAsync(0); + } + + @Override + public V peekFirst() { + return get(peekFirstAsync()); + } + +// @Override + public RFuture peekLastAsync() { + return getLastAsync(); + } + + @Override + public V peekLast() 
{ + return get(getLastAsync()); + } + +// @Override + public RFuture pollFirstAsync() { + return pollAsync(); + } + + @Override + public V pollFirst() { + return poll(); + } + +// @Override + public RFuture pollLastAsync() { + return commandExecutor.writeAsync(getName(), codec, RedisCommands.RPOP, getName()); + } + + + @Override + public V pollLast() { + return get(pollLastAsync()); + } + +// @Override + public RFuture popAsync() { + return pollAsync(); + } + + @Override + public V pop() { + return removeFirst(); + } + +// @Override + public RFuture pushAsync(V e) { + return addFirstAsync(e); + } + + @Override + public void push(V e) { + addFirst(e); + } + +// @Override + public RFuture removeFirstOccurrenceAsync(Object o) { + return removeAsync(o, 1); + } + + @Override + public boolean removeFirstOccurrence(Object o) { + return remove(o, 1); + } + +// @Override + public RFuture removeFirstAsync() { + return pollAsync(); + } + +// @Override + public RFuture removeLastAsync() { + return commandExecutor.writeAsync(getName(), codec, RedisCommands.RPOP, getName()); + } + + @Override + public V removeLast() { + V value = get(removeLastAsync()); + if (value == null) { + throw new NoSuchElementException(); + } + return value; + } + +// @Override + public RFuture removeLastOccurrenceAsync(Object o) { + return removeAsync(o, -1); + } + + @Override + public boolean removeLastOccurrence(Object o) { + return remove(o, -1); + } + +} diff --git a/redisson/src/main/java/org/redisson/RedissonPriorityQueue.java b/redisson/src/main/java/org/redisson/RedissonPriorityQueue.java new file mode 100644 index 000000000..d5aa64722 --- /dev/null +++ b/redisson/src/main/java/org/redisson/RedissonPriorityQueue.java @@ -0,0 +1,414 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
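The `RedissonPriorityDeque` above extends the new priority queue, which keeps its backing Redis list sorted by the queue's comparator (natural ordering unless one is installed), so the first and last list elements are the minimum and maximum. A usage sketch, assuming the `getPriorityDeque` accessor that accompanies this class on `RedissonClient` and default connection settings; the queue name is illustrative:

```java
import org.redisson.Redisson;
import org.redisson.api.RPriorityDeque;
import org.redisson.api.RedissonClient;

public class PriorityDequeExample {
    public static void main(String[] args) {
        RedissonClient redisson = Redisson.create();
        RPriorityDeque<Integer> scores = redisson.getPriorityDeque("scores");

        // add() binary-searches the insertion point and LINSERTs the value,
        // keeping the backing list sorted
        scores.add(30);
        scores.add(10);
        scores.add(20);

        System.out.println(scores.peekFirst()); // 10 - smallest element
        System.out.println(scores.peekLast());  // 30 - largest element

        redisson.shutdown();
    }
}
```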
+ */ +package org.redisson; + +import java.io.ByteArrayOutputStream; +import java.io.ObjectOutputStream; +import java.io.Serializable; +import java.math.BigInteger; +import java.security.MessageDigest; +import java.util.Arrays; +import java.util.Collection; +import java.util.Comparator; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; + +import org.redisson.api.RBucket; +import org.redisson.api.RFuture; +import org.redisson.api.RLock; +import org.redisson.api.RPriorityQueue; +import org.redisson.client.codec.Codec; +import org.redisson.client.codec.StringCodec; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.command.CommandExecutor; + +/** + * + * @author Nikita Koksharov + * + * @param value type + */ +public class RedissonPriorityQueue extends RedissonList implements RPriorityQueue { + + private static class NaturalComparator implements Comparator, Serializable { + + private static final long serialVersionUID = 7207038068494060240L; + + static final NaturalComparator NATURAL_ORDER = new NaturalComparator(); + + public int compare(V c1, V c2) { + Comparable c1co = (Comparable) c1; + Comparable c2co = (Comparable) c2; + return c1co.compareTo(c2co); + } + + } + + public static class BinarySearchResult { + + private V value; + private Integer index; + + public BinarySearchResult(V value) { + super(); + this.value = value; + } + + public BinarySearchResult() { + } + + public void setIndex(Integer index) { + this.index = index; + } + public Integer getIndex() { + return index; + } + + public V getValue() { + return value; + } + + + } + + private Comparator comparator = NaturalComparator.NATURAL_ORDER; + + CommandExecutor commandExecutor; + + private RLock lock; + private RBucket comparatorHolder; + + protected RedissonPriorityQueue(CommandExecutor commandExecutor, String name, Redisson redisson) { + super(commandExecutor, name); + this.commandExecutor = commandExecutor; + + comparatorHolder = redisson.getBucket(getComparatorKeyName(), StringCodec.INSTANCE); + lock = redisson.getLock("redisson_sortedset_lock:{" + getName() + "}"); + + loadComparator(); + } + + public RedissonPriorityQueue(Codec codec, CommandExecutor commandExecutor, String name, Redisson redisson) { + super(codec, commandExecutor, name); + this.commandExecutor = commandExecutor; + + comparatorHolder = redisson.getBucket(getComparatorKeyName(), StringCodec.INSTANCE); + lock = redisson.getLock("redisson_sortedset_lock:{" + getName() + "}"); + + loadComparator(); + } + + private void loadComparator() { + try { + String comparatorSign = comparatorHolder.get(); + if (comparatorSign != null) { + String[] parts = comparatorSign.split(":"); + String className = parts[0]; + String sign = parts[1]; + + String result = calcClassSign(className); + if (!result.equals(sign)) { + throw new IllegalStateException("Local class signature of " + className + " differs from used by this SortedSet!"); + } + + Class clazz = Class.forName(className); + comparator = (Comparator) clazz.newInstance(); + } + } catch (IllegalStateException e) { + throw e; + } catch (Exception e) { + throw new IllegalStateException(e); + } + } + + // TODO cache result + private static String calcClassSign(String name) { + try { + Class clazz = Class.forName(name); + + ByteArrayOutputStream result = new ByteArrayOutputStream(); + ObjectOutputStream outputStream = new ObjectOutputStream(result); + outputStream.writeObject(clazz); + outputStream.close(); + + MessageDigest crypt = 
MessageDigest.getInstance("SHA-1"); + crypt.reset(); + crypt.update(result.toByteArray()); + + return new BigInteger(1, crypt.digest()).toString(16); + } catch (Exception e) { + throw new IllegalStateException("Can't calculate sign of " + name, e); + } + } + + @Override + public List readAll() { + return get(readAllAsync()); + } + + @Override + public RFuture> readAllAsync() { + return commandExecutor.readAsync(getName(), codec, RedisCommands.LRANGE, getName(), 0, -1); + } + + @Override + public boolean offer(V e) { + return add(e); + } + +// @Override + public RFuture offerAsync(V e) { + return addAsync(e); + } + + @Override + public boolean contains(final Object o) { + return binarySearch((V)o, codec).getIndex() >= 0; + } + + @Override + public boolean add(V value) { + lock.lock(); + + try { + checkComparator(); + + BinarySearchResult res = binarySearch(value, codec); + int index = 0; + if (res.getIndex() < 0) { + index = -(res.getIndex() + 1); + } else { + index = res.getIndex() + 1; + } + + byte[] encodedValue = encode(value); + + commandExecutor.evalWrite(getName(), RedisCommands.EVAL_VOID, + "local len = redis.call('llen', KEYS[1]);" + + "if tonumber(ARGV[1]) < len then " + + "local pivot = redis.call('lindex', KEYS[1], ARGV[1]);" + + "redis.call('linsert', KEYS[1], 'before', pivot, ARGV[2]);" + + "return;" + + "end;" + + "redis.call('rpush', KEYS[1], ARGV[2]);", Arrays.asList(getName()), index, encodedValue); + return true; + } finally { + lock.unlock(); + } + } + + private void checkComparator() { + String comparatorSign = comparatorHolder.get(); + if (comparatorSign != null) { + String[] vals = comparatorSign.split(":"); + String className = vals[0]; + if (!comparator.getClass().getName().equals(className)) { + loadComparator(); + } + } + } + + @Override + public boolean remove(Object value) { + lock.lock(); + + try { + checkComparator(); + + BinarySearchResult res = binarySearch((V) value, codec); + if (res.getIndex() < 0) { + return false; + } + + remove((int)res.getIndex()); + return true; + } finally { + lock.unlock(); + } + } + + @Override + public boolean containsAll(Collection c) { + for (Object object : c) { + if (!contains(object)) { + return false; + } + } + return true; + } + + @Override + public boolean addAll(Collection c) { + boolean changed = false; + for (V v : c) { + if (add(v)) { + changed = true; + } + } + return changed; + } + + @Override + public boolean retainAll(Collection c) { + boolean changed = false; + for (Iterator iterator = iterator(); iterator.hasNext();) { + Object object = (Object) iterator.next(); + if (!c.contains(object)) { + iterator.remove(); + changed = true; + } + } + return changed; + } + + @Override + public boolean removeAll(Collection c) { + boolean changed = false; + for (Object obj : c) { + if (remove(obj)) { + changed = true; + } + } + return changed; + } + + @Override + public void clear() { + delete(); + } + + @Override + public Comparator comparator() { + return comparator; + } + +// @Override + public RFuture pollAsync() { + return commandExecutor.writeAsync(getName(), codec, RedisCommands.LPOP, getName()); + } + + public V getFirst() { + V value = getValue(0); + if (value == null) { + throw new NoSuchElementException(); + } + return value; + } + + @Override + public V poll() { + return get(pollAsync()); + } + + @Override + public V element() { + return getFirst(); + } + +// @Override + public RFuture peekAsync() { + return getAsync(0); + } + + @Override + public V peek() { + return getValue(0); + } + + private String 
getComparatorKeyName() { + return "redisson_sortedset_comparator:{" + getName() + "}"; + } + + @Override + public boolean trySetComparator(Comparator comparator) { + String className = comparator.getClass().getName(); + final String comparatorSign = className + ":" + calcClassSign(className); + + Boolean res = commandExecutor.evalWrite(getName(), StringCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, + "if redis.call('llen', KEYS[1]) == 0 then " + + "redis.call('set', KEYS[2], ARGV[1]); " + + "return 1; " + + "else " + + "return 0; " + + "end", + Arrays.asList(getName(), getComparatorKeyName()), comparatorSign); + if (res) { + this.comparator = comparator; + } + return res; + } + + @Override + public V remove() { + return removeFirst(); + } + + public V removeFirst() { + V value = poll(); + if (value == null) { + throw new NoSuchElementException(); + } + return value; + } + + // TODO optimize: get three values each time instead of single + public BinarySearchResult binarySearch(V value, Codec codec) { + int size = size(); + int upperIndex = size - 1; + int lowerIndex = 0; + while (lowerIndex <= upperIndex) { + int index = lowerIndex + (upperIndex - lowerIndex) / 2; + + V res = getValue(index); + if (res == null) { + return new BinarySearchResult(); + } + int cmp = comparator.compare(value, res); + + if (cmp == 0) { + BinarySearchResult indexRes = new BinarySearchResult(); + indexRes.setIndex(index); + return indexRes; + } else if (cmp < 0) { + upperIndex = index - 1; + } else { + lowerIndex = index + 1; + } + } + + BinarySearchResult indexRes = new BinarySearchResult(); + indexRes.setIndex(-(lowerIndex + 1)); + return indexRes; + } + + public String toString() { + Iterator it = iterator(); + if (! it.hasNext()) + return "[]"; + + StringBuilder sb = new StringBuilder(); + sb.append('['); + for (;;) { + V e = it.next(); + sb.append(e == this ? "(this Collection)" : e); + if (! 
it.hasNext()) + return sb.append(']').toString(); + sb.append(',').append(' '); + } + } + +} diff --git a/redisson/src/main/java/org/redisson/RedissonQueue.java b/redisson/src/main/java/org/redisson/RedissonQueue.java index 82d58c060..c1cbba746 100644 --- a/redisson/src/main/java/org/redisson/RedissonQueue.java +++ b/redisson/src/main/java/org/redisson/RedissonQueue.java @@ -16,6 +16,7 @@ package org.redisson; import java.util.NoSuchElementException; +import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; import org.redisson.api.RQueue; @@ -65,7 +66,15 @@ public class RedissonQueue extends RedissonList implements RQueue { } return value; } - + + protected long toSeconds(long timeout, TimeUnit unit) { + long seconds = unit.toSeconds(timeout); + if (timeout != 0 && seconds == 0) { + seconds = 1; + } + return seconds; + } + @Override public V remove() { return removeFirst(); diff --git a/redisson/src/main/java/org/redisson/RedissonReactive.java b/redisson/src/main/java/org/redisson/RedissonReactive.java index 47176e68c..f1514d3f4 100644 --- a/redisson/src/main/java/org/redisson/RedissonReactive.java +++ b/redisson/src/main/java/org/redisson/RedissonReactive.java @@ -18,6 +18,7 @@ package org.redisson; import java.util.ArrayList; import java.util.Collection; import java.util.List; +import java.util.UUID; import org.redisson.api.ClusterNode; import org.redisson.api.Node; @@ -50,6 +51,7 @@ import org.redisson.command.CommandReactiveService; import org.redisson.config.Config; import org.redisson.config.ConfigSupport; import org.redisson.connection.ConnectionManager; +import org.redisson.eviction.EvictionScheduler; import org.redisson.reactive.RedissonAtomicLongReactive; import org.redisson.reactive.RedissonBatchReactive; import org.redisson.reactive.RedissonBitSetReactive; @@ -84,6 +86,7 @@ public class RedissonReactive implements RedissonReactiveClient { protected final ConnectionManager connectionManager; protected final Config config; protected final CodecProvider codecProvider; + protected final UUID id = UUID.randomUUID(); protected RedissonReactive(Config config) { this.config = config; @@ -98,12 +101,12 @@ public class RedissonReactive implements RedissonReactiveClient { @Override public RMapCacheReactive getMapCache(String name, Codec codec) { - return new RedissonMapCacheReactive(codec, evictionScheduler, commandExecutor, name); + return new RedissonMapCacheReactive(id, evictionScheduler, codec, commandExecutor, name); } @Override public RMapCacheReactive getMapCache(String name) { - return new RedissonMapCacheReactive(evictionScheduler, commandExecutor, name); + return new RedissonMapCacheReactive(id, evictionScheduler, commandExecutor, name); } @Override @@ -262,7 +265,7 @@ public class RedissonReactive implements RedissonReactiveClient { @Override public RBatchReactive createBatch() { - RedissonBatchReactive batch = new RedissonBatchReactive(evictionScheduler, connectionManager); + RedissonBatchReactive batch = new RedissonBatchReactive(id, evictionScheduler, connectionManager); if (config.isRedissonReferenceEnabled()) { batch.enableRedissonReferenceSupport(this); } diff --git a/redisson/src/main/java/org/redisson/RedissonReadLock.java b/redisson/src/main/java/org/redisson/RedissonReadLock.java index 2d6480a44..8fb3b1202 100644 --- a/redisson/src/main/java/org/redisson/RedissonReadLock.java +++ b/redisson/src/main/java/org/redisson/RedissonReadLock.java @@ -51,6 +51,10 @@ public class RedissonReadLock extends RedissonLock implements RLock { String getChannelName() { 
return "redisson_rwlock__{" + getName() + "}"; } + + String getWriteLockName(long threadId) { + return super.getLockName(threadId) + ":write"; + } @Override RFuture tryLockInnerAsync(long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand command) { @@ -64,13 +68,13 @@ public class RedissonReadLock extends RedissonLock implements RLock { "redis.call('pexpire', KEYS[1], ARGV[1]); " + "return nil; " + "end; " + - "if (mode == 'read') then " + + "if (mode == 'read') or (mode == 'write' and redis.call('hexists', KEYS[1], ARGV[3]) == 1) then " + "redis.call('hincrby', KEYS[1], ARGV[2], 1); " + "redis.call('pexpire', KEYS[1], ARGV[1]); " + "return nil; " + "end;" + "return redis.call('pttl', KEYS[1]);", - Arrays.asList(getName()), internalLockLeaseTime, getLockName(threadId)); + Arrays.asList(getName()), internalLockLeaseTime, getLockName(threadId), getWriteLockName(threadId)); } @Override @@ -80,8 +84,8 @@ public class RedissonReadLock extends RedissonLock implements RLock { "if (mode == false) then " + "redis.call('publish', KEYS[2], ARGV[1]); " + "return 1; " + - "end; " - + "if (mode == 'read') then " + + "end; " + +// "if (mode == 'read') then " + "local lockExists = redis.call('hexists', KEYS[1], ARGV[3]); " + "if (lockExists == 0) then " + "return nil;" + @@ -99,7 +103,7 @@ public class RedissonReadLock extends RedissonLock implements RLock { "return 1; "+ "end; " + "end; " + - "end; " + +// "end; " + "return nil; ", Arrays.asList(getName(), getChannelName()), LockPubSub.unlockMessage, internalLockLeaseTime, getLockName(Thread.currentThread().getId())); if (opStatus == null) { @@ -146,18 +150,4 @@ public class RedissonReadLock extends RedissonLock implements RLock { return "read".equals(res); } - @Override - public boolean isHeldByCurrentThread() { - return commandExecutor.write(getName(), LongCodec.INSTANCE, RedisCommands.HEXISTS, getName(), getLockName(Thread.currentThread().getId())); - } - - @Override - public int getHoldCount() { - Long res = commandExecutor.write(getName(), LongCodec.INSTANCE, RedisCommands.HGET, getName(), getLockName(Thread.currentThread().getId())); - if (res == null) { - return 0; - } - return res.intValue(); - } - } diff --git a/redisson/src/main/java/org/redisson/RedissonRemoteService.java b/redisson/src/main/java/org/redisson/RedissonRemoteService.java index f1110c6f3..10c03290e 100644 --- a/redisson/src/main/java/org/redisson/RedissonRemoteService.java +++ b/redisson/src/main/java/org/redisson/RedissonRemoteService.java @@ -93,7 +93,7 @@ public class RedissonRemoteService extends BaseRemoteService implements RRemoteS } for (Method method : remoteInterface.getMethods()) { RemoteServiceMethod value = new RemoteServiceMethod(method, object); - RemoteServiceKey key = new RemoteServiceKey(remoteInterface, method.getName()); + RemoteServiceKey key = new RemoteServiceKey(remoteInterface, method.getName(), getMethodSignatures(method)); if (beans.put(key, value) != null) { return; } @@ -113,6 +113,7 @@ public class RedissonRemoteService extends BaseRemoteService implements RRemoteS @Override public void operationComplete(Future future) throws Exception { if (!future.isSuccess()) { + log.error("Can't process the remote service request.", future.cause()); if (future.cause() instanceof RedissonShutdownException) { return; } @@ -183,7 +184,7 @@ public class RedissonRemoteService extends BaseRemoteService implements RRemoteS private void executeMethod(final Class remoteInterface, final RBlockingQueue requestQueue, final ExecutorService executor, final 
RemoteServiceRequest request) { - final RemoteServiceMethod method = beans.get(new RemoteServiceKey(remoteInterface, request.getMethodName())); + final RemoteServiceMethod method = beans.get(new RemoteServiceKey(remoteInterface, request.getMethodName(), request.getSignatures())); final String responseName = getResponseQueueName(remoteInterface, request.getRequestId()); RBlockingQueue cancelRequestQueue = diff --git a/redisson/src/main/java/org/redisson/RedissonScoredSortedSet.java b/redisson/src/main/java/org/redisson/RedissonScoredSortedSet.java index 21d39c465..a4d7cb95f 100644 --- a/redisson/src/main/java/org/redisson/RedissonScoredSortedSet.java +++ b/redisson/src/main/java/org/redisson/RedissonScoredSortedSet.java @@ -25,13 +25,16 @@ import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.Map.Entry; import org.redisson.api.RFuture; import org.redisson.api.RScoredSortedSet; +import org.redisson.api.SortOrder; import org.redisson.client.codec.Codec; import org.redisson.client.codec.DoubleCodec; import org.redisson.client.codec.LongCodec; +import org.redisson.client.codec.ScanCodec; import org.redisson.client.codec.ScoredCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; @@ -39,8 +42,15 @@ import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.ScoredEntry; import org.redisson.client.protocol.convertor.BooleanReplayConvertor; import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandAsyncExecutor; +/** + * + * @author Nikita Koksharov + * + * @param value type + */ public class RedissonScoredSortedSet extends RedissonExpirable implements RScoredSortedSet { public RedissonScoredSortedSet(CommandAsyncExecutor commandExecutor, String name) { @@ -251,8 +261,8 @@ public class RedissonScoredSortedSet extends RedissonExpirable implements RSc return commandExecutor.readAsync(getName(), codec, RedisCommands.ZRANK_INT, getName(), o); } - private ListScanResult scanIterator(InetSocketAddress client, long startPos) { - RFuture> f = commandExecutor.readAsync(client, getName(), codec, RedisCommands.ZSCAN, getName(), startPos); + private ListScanResult scanIterator(InetSocketAddress client, long startPos) { + RFuture> f = commandExecutor.readAsync(client, getName(), new ScanCodec(codec), RedisCommands.ZSCAN, getName(), startPos); return get(f); } @@ -261,7 +271,7 @@ public class RedissonScoredSortedSet extends RedissonExpirable implements RSc return new RedissonBaseIterator() { @Override - ListScanResult iterator(InetSocketAddress client, long nextIterPos) { + ListScanResult iterator(InetSocketAddress client, long nextIterPos) { return scanIterator(client, nextIterPos); } @@ -627,5 +637,168 @@ public class RedissonScoredSortedSet extends RedissonExpirable implements RSc return commandExecutor.writeAsync(getName(), LongCodec.INSTANCE, RedisCommands.ZUNIONSTORE_INT, args.toArray()); } + @Override + public Set readSort(SortOrder order) { + return get(readSortAsync(order)); + } + + @Override + public RFuture> readSortAsync(SortOrder order) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, getName(), order); + } + + @Override + public Set readSort(SortOrder order, int offset, int count) { + return get(readSortAsync(order, offset, count)); + } + + @Override + public RFuture> readSortAsync(SortOrder 
order, int offset, int count) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, getName(), "LIMIT", offset, count, order); + } + + @Override + public Set readSort(String byPattern, SortOrder order) { + return get(readSortAsync(byPattern, order)); + } + + @Override + public RFuture> readSortAsync(String byPattern, SortOrder order) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, getName(), "BY", byPattern, order); + } + + @Override + public Set readSort(String byPattern, SortOrder order, int offset, int count) { + return get(readSortAsync(byPattern, order, offset, count)); + } + + @Override + public RFuture> readSortAsync(String byPattern, SortOrder order, int offset, int count) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, getName(), "BY", byPattern, "LIMIT", offset, count, order); + } + + @Override + public Collection readSort(String byPattern, List getPatterns, SortOrder order) { + return (Collection)get(readSortAsync(byPattern, getPatterns, order)); + } + + @Override + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order) { + return readSortAsync(byPattern, getPatterns, order, -1, -1); + } + + @Override + public Collection readSort(String byPattern, List getPatterns, SortOrder order, int offset, int count) { + return (Collection)get(readSortAsync(byPattern, getPatterns, order, offset, count)); + } + + @Override + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order, int offset, int count) { + List params = new ArrayList(); + params.add(getName()); + if (byPattern != null) { + params.add("BY"); + params.add(byPattern); + } + if (offset != -1 && count != -1) { + params.add("LIMIT"); + } + if (offset != -1) { + params.add(offset); + } + if (count != -1) { + params.add(count); + } + for (String pattern : getPatterns) { + params.add("GET"); + params.add(pattern); + } + params.add(order); + + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, params.toArray()); + } + + @Override + public int sortTo(String destName, SortOrder order) { + return get(sortToAsync(destName, order)); + } + + @Override + public RFuture sortToAsync(String destName, SortOrder order) { + return sortToAsync(destName, null, Collections.emptyList(), order, -1, -1); + } + + @Override + public int sortTo(String destName, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, order, offset, count)); + } + + @Override + public RFuture sortToAsync(String destName, SortOrder order, int offset, int count) { + return sortToAsync(destName, null, Collections.emptyList(), order, offset, count); + } + + @Override + public int sortTo(String destName, String byPattern, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, byPattern, order, offset, count)); + } + + @Override + public int sortTo(String destName, String byPattern, SortOrder order) { + return get(sortToAsync(destName, byPattern, order)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, SortOrder order) { + return sortToAsync(destName, byPattern, Collections.emptyList(), order, -1, -1); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, SortOrder order, int offset, int count) { + return sortToAsync(destName, byPattern, Collections.emptyList(), order, offset, count); + } + + @Override + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order) { + return 
get(sortToAsync(destName, byPattern, getPatterns, order)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order) { + return sortToAsync(destName, byPattern, getPatterns, order, -1, -1); + } + + @Override + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, byPattern, getPatterns, order, offset, count)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order, int offset, int count) { + List params = new ArrayList(); + params.add(getName()); + if (byPattern != null) { + params.add("BY"); + params.add(byPattern); + } + if (offset != -1 && count != -1) { + params.add("LIMIT"); + } + if (offset != -1) { + params.add(offset); + } + if (count != -1) { + params.add(count); + } + for (String pattern : getPatterns) { + params.add("GET"); + params.add(pattern); + } + params.add(order); + params.add("STORE"); + params.add(destName); + + return commandExecutor.writeAsync(getName(), codec, RedisCommands.SORT_TO, params.toArray()); + } } diff --git a/redisson/src/main/java/org/redisson/RedissonSemaphore.java b/redisson/src/main/java/org/redisson/RedissonSemaphore.java index 0d406fdac..e65706242 100644 --- a/redisson/src/main/java/org/redisson/RedissonSemaphore.java +++ b/redisson/src/main/java/org/redisson/RedissonSemaphore.java @@ -79,7 +79,7 @@ public class RedissonSemaphore extends RedissonExpirable implements RSemaphore { } RFuture future = subscribe(); - get(future); + commandExecutor.syncSubscription(future); try { while (true) { if (tryAcquire(permits)) { diff --git a/redisson/src/main/java/org/redisson/RedissonSet.java b/redisson/src/main/java/org/redisson/RedissonSet.java index 6bda92d72..590461ccf 100644 --- a/redisson/src/main/java/org/redisson/RedissonSet.java +++ b/redisson/src/main/java/org/redisson/RedissonSet.java @@ -19,18 +19,22 @@ import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; +import java.util.Collections; import java.util.Iterator; import java.util.List; import java.util.Set; import org.redisson.api.RFuture; import org.redisson.api.RSet; +import org.redisson.api.SortOrder; import org.redisson.client.codec.Codec; +import org.redisson.client.codec.ScanCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.convertor.BooleanReplayConvertor; import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandAsyncExecutor; /** @@ -40,7 +44,7 @@ import org.redisson.command.CommandAsyncExecutor; * * @param value */ -public class RedissonSet extends RedissonExpirable implements RSet { +public class RedissonSet extends RedissonExpirable implements RSet, ScanIterator { protected RedissonSet(CommandAsyncExecutor commandExecutor, String name) { super(commandExecutor, name); @@ -79,8 +83,9 @@ public class RedissonSet extends RedissonExpirable implements RSet { return getName(); } - ListScanResult scanIterator(String name, InetSocketAddress client, long startPos) { - RFuture> f = commandExecutor.readAsync(client, name, codec, RedisCommands.SSCAN, name, startPos); + @Override + public ListScanResult scanIterator(String name, InetSocketAddress client, long startPos) { + RFuture> f 
= commandExecutor.readAsync(client, name, new ScanCodec(codec), RedisCommands.SSCAN, name, startPos); return get(f); } @@ -89,7 +94,7 @@ public class RedissonSet extends RedissonExpirable implements RSet { return new RedissonBaseIterator() { @Override - ListScanResult iterator(InetSocketAddress client, long nextIterPos) { + ListScanResult iterator(InetSocketAddress client, long nextIterPos) { return scanIterator(getName(), client, nextIterPos); } @@ -143,6 +148,16 @@ public class RedissonSet extends RedissonExpirable implements RSet { return commandExecutor.writeAsync(getName(), codec, RedisCommands.SPOP_SINGLE, getName()); } + @Override + public Set removeRandom(int amount) { + return get(removeRandomAsync(amount)); + } + + @Override + public RFuture> removeRandomAsync(int amount) { + return commandExecutor.writeAsync(getName(), codec, RedisCommands.SPOP, getName(), amount); + } + @Override public V random() { return get(randomAsync()); @@ -346,4 +361,168 @@ public class RedissonSet extends RedissonExpirable implements RSet { } } + @Override + public Set readSort(SortOrder order) { + return get(readSortAsync(order)); + } + + @Override + public RFuture> readSortAsync(SortOrder order) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, getName(), order); + } + + @Override + public Set readSort(SortOrder order, int offset, int count) { + return get(readSortAsync(order, offset, count)); + } + + @Override + public RFuture> readSortAsync(SortOrder order, int offset, int count) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, getName(), "LIMIT", offset, count, order); + } + + @Override + public Set readSort(String byPattern, SortOrder order) { + return get(readSortAsync(byPattern, order)); + } + + @Override + public RFuture> readSortAsync(String byPattern, SortOrder order) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, getName(), "BY", byPattern, order); + } + + @Override + public Set readSort(String byPattern, SortOrder order, int offset, int count) { + return get(readSortAsync(byPattern, order, offset, count)); + } + + @Override + public RFuture> readSortAsync(String byPattern, SortOrder order, int offset, int count) { + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, getName(), "BY", byPattern, "LIMIT", offset, count, order); + } + + @Override + public Collection readSort(String byPattern, List getPatterns, SortOrder order) { + return (Collection)get(readSortAsync(byPattern, getPatterns, order)); + } + + @Override + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order) { + return readSortAsync(byPattern, getPatterns, order, -1, -1); + } + + @Override + public Collection readSort(String byPattern, List getPatterns, SortOrder order, int offset, int count) { + return (Collection)get(readSortAsync(byPattern, getPatterns, order, offset, count)); + } + + @Override + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order, int offset, int count) { + List params = new ArrayList(); + params.add(getName()); + if (byPattern != null) { + params.add("BY"); + params.add(byPattern); + } + if (offset != -1 && count != -1) { + params.add("LIMIT"); + } + if (offset != -1) { + params.add(offset); + } + if (count != -1) { + params.add(count); + } + for (String pattern : getPatterns) { + params.add("GET"); + params.add(pattern); + } + params.add(order); + + return commandExecutor.readAsync(getName(), codec, RedisCommands.SORT_SET, 
params.toArray()); + } + + @Override + public int sortTo(String destName, SortOrder order) { + return get(sortToAsync(destName, order)); + } + + @Override + public RFuture sortToAsync(String destName, SortOrder order) { + return sortToAsync(destName, null, Collections.emptyList(), order, -1, -1); + } + + @Override + public int sortTo(String destName, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, order, offset, count)); + } + + @Override + public RFuture sortToAsync(String destName, SortOrder order, int offset, int count) { + return sortToAsync(destName, null, Collections.emptyList(), order, offset, count); + } + + @Override + public int sortTo(String destName, String byPattern, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, byPattern, order, offset, count)); + } + + @Override + public int sortTo(String destName, String byPattern, SortOrder order) { + return get(sortToAsync(destName, byPattern, order)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, SortOrder order) { + return sortToAsync(destName, byPattern, Collections.emptyList(), order, -1, -1); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, SortOrder order, int offset, int count) { + return sortToAsync(destName, byPattern, Collections.emptyList(), order, offset, count); + } + + @Override + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order) { + return get(sortToAsync(destName, byPattern, getPatterns, order)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order) { + return sortToAsync(destName, byPattern, getPatterns, order, -1, -1); + } + + @Override + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order, int offset, int count) { + return get(sortToAsync(destName, byPattern, getPatterns, order, offset, count)); + } + + @Override + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order, int offset, int count) { + List params = new ArrayList(); + params.add(getName()); + if (byPattern != null) { + params.add("BY"); + params.add(byPattern); + } + if (offset != -1 && count != -1) { + params.add("LIMIT"); + } + if (offset != -1) { + params.add(offset); + } + if (count != -1) { + params.add(count); + } + for (String pattern : getPatterns) { + params.add("GET"); + params.add(pattern); + } + params.add(order); + params.add("STORE"); + params.add(destName); + + return commandExecutor.writeAsync(getName(), codec, RedisCommands.SORT_TO, params.toArray()); + } + } diff --git a/redisson/src/main/java/org/redisson/RedissonSetCache.java b/redisson/src/main/java/org/redisson/RedissonSetCache.java index 408853ce6..131b5ab96 100644 --- a/redisson/src/main/java/org/redisson/RedissonSetCache.java +++ b/redisson/src/main/java/org/redisson/RedissonSetCache.java @@ -28,13 +28,16 @@ import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; import org.redisson.api.RSetCache; import org.redisson.client.codec.Codec; +import org.redisson.client.codec.ScanCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.RedisStrictCommand; import org.redisson.client.protocol.convertor.BooleanReplayConvertor; import org.redisson.client.protocol.decoder.ListScanResult; +import 
org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandAsyncExecutor; +import org.redisson.eviction.EvictionScheduler; /** *

 * Set-based cache with ability to set TTL for each entry via
@@ -45,7 +48,7 @@ import org.redisson.command.CommandAsyncExecutor;
 * Thus values are checked for TTL expiration during any value read operation.
 * If entry expired then it doesn't returns and clean task runs asynchronous.
 * Clean task deletes removes 100 expired entries at once.
- * In addition there is {@link org.redisson.EvictionScheduler}. This scheduler
+ * In addition there is {@link org.redisson.eviction.EvictionScheduler}. This scheduler
 * deletes expired entries in time interval between 5 seconds to 2 hours.
 *
 * If eviction is not required then it's better to use {@link org.redisson.api.RSet}.
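The Javadoc above summarizes the `RSetCache` model: each entry's expiry timestamp is kept as its score in the backing sorted set, expired members are filtered out on every read, and the `EvictionScheduler` trims leftovers in the background. A brief usage sketch, not part of this patch, assuming default connection settings; the set name, values and TTLs are illustrative:

```java
import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RSetCache;
import org.redisson.api.RedissonClient;

public class SetCacheTtlExample {
    public static void main(String[] args) throws InterruptedException {
        RedissonClient redisson = Redisson.create();
        RSetCache<String> sessions = redisson.getSetCache("sessions");

        // every entry carries its own TTL, stored as the member's score (expiry timestamp)
        sessions.add("session-A", 1, TimeUnit.SECONDS);
        sessions.add("session-B", 1, TimeUnit.HOURS);

        Thread.sleep(1500);
        System.out.println(sessions.contains("session-A")); // false - expired, filtered on read
        System.out.println(sessions.contains("session-B")); // true

        redisson.shutdown();
    }
}
```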

@@ -54,8 +57,16 @@ import org.redisson.command.CommandAsyncExecutor; * * @param value */ -public class RedissonSetCache extends RedissonExpirable implements RSetCache { +public class RedissonSetCache extends RedissonExpirable implements RSetCache, ScanIterator { + RedissonSetCache(CommandAsyncExecutor commandExecutor, String name) { + super(commandExecutor, name); + } + + RedissonSetCache(Codec codec, CommandAsyncExecutor commandExecutor, String name) { + super(codec, commandExecutor, name); + } + public RedissonSetCache(EvictionScheduler evictionScheduler, CommandAsyncExecutor commandExecutor, String name) { super(commandExecutor, name); evictionScheduler.schedule(getName()); @@ -88,7 +99,7 @@ public class RedissonSetCache extends RedissonExpirable implements RSetCache< @Override public RFuture containsAsync(Object o) { - return commandExecutor.evalReadAsync(getName(), codec, new RedisStrictCommand("EVAL", new BooleanReplayConvertor(), 5), + return commandExecutor.evalReadAsync(getName(o), codec, new RedisStrictCommand("EVAL", new BooleanReplayConvertor(), 5), "local expireDateScore = redis.call('zscore', KEYS[1], ARGV[2]); " + "if expireDateScore ~= false then " + "if tonumber(expireDateScore) <= tonumber(ARGV[1]) then " + @@ -99,16 +110,16 @@ public class RedissonSetCache extends RedissonExpirable implements RSetCache< "else " + "return 0;" + "end; ", - Arrays.asList(getName()), System.currentTimeMillis(), o); + Arrays.asList(getName(o)), System.currentTimeMillis(), o); } - ListScanResult scanIterator(InetSocketAddress client, long startPos) { - RFuture> f = scanIteratorAsync(client, startPos); + public ListScanResult scanIterator(String name, InetSocketAddress client, long startPos) { + RFuture> f = scanIteratorAsync(name, client, startPos); return get(f); } - public RFuture> scanIteratorAsync(InetSocketAddress client, long startPos) { - return commandExecutor.evalReadAsync(client, getName(), codec, RedisCommands.EVAL_ZSCAN, + public RFuture> scanIteratorAsync(String name, InetSocketAddress client, long startPos) { + return commandExecutor.evalReadAsync(client, name, new ScanCodec(codec), RedisCommands.EVAL_ZSCAN, "local result = {}; " + "local res = redis.call('zscan', KEYS[1], ARGV[1]); " + "for i, value in ipairs(res[2]) do " @@ -119,7 +130,7 @@ public class RedissonSetCache extends RedissonExpirable implements RSetCache< + "end; " + "end;" + "end;" - + "return {res[1], result};", Arrays.asList(getName()), startPos, System.currentTimeMillis()); + + "return {res[1], result};", Arrays.asList(name), startPos, System.currentTimeMillis()); } @Override @@ -127,8 +138,8 @@ public class RedissonSetCache extends RedissonExpirable implements RSetCache< return new RedissonBaseIterator() { @Override - ListScanResult iterator(InetSocketAddress client, long nextIterPos) { - return scanIterator(client, nextIterPos); + ListScanResult iterator(InetSocketAddress client, long nextIterPos) { + return scanIterator(getName(), client, nextIterPos); } @Override @@ -146,27 +157,18 @@ public class RedissonSetCache extends RedissonExpirable implements RSetCache< @Override public RFuture> readAllAsync() { - return (RFuture>)readAllAsync(RedisCommands.ZRANGEBYSCORE); - } - - private RFuture readAllAsync(RedisCommand> command) { - return commandExecutor.readAsync(getName(), codec, command, getName(), System.currentTimeMillis(), 92233720368547758L); - } - - - private RFuture> readAllasListAsync() { - return (RFuture>)readAllAsync(RedisCommands.ZRANGEBYSCORE_LIST); + return commandExecutor.readAsync(getName(), 
codec, RedisCommands.ZRANGEBYSCORE, getName(), System.currentTimeMillis(), 92233720368547758L); } @Override public Object[] toArray() { - List res = get(readAllasListAsync()); + Set res = get(readAllAsync()); return res.toArray(); } @Override public T[] toArray(T[] a) { - List res = get(readAllasListAsync()); + Set res = get(readAllAsync()); return res.toArray(a); } @@ -196,14 +198,14 @@ public class RedissonSetCache extends RedissonExpirable implements RSetCache< byte[] objectState = encode(value); long timeoutDate = System.currentTimeMillis() + unit.toMillis(ttl); - return commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_BOOLEAN, + return commandExecutor.evalWriteAsync(getName(value), codec, RedisCommands.EVAL_BOOLEAN, "local expireDateScore = redis.call('zscore', KEYS[1], ARGV[3]); " + "redis.call('zadd', KEYS[1], ARGV[2], ARGV[3]); " + "if expireDateScore ~= false and tonumber(expireDateScore) > tonumber(ARGV[1]) then " + "return 0;" + "end; " + "return 1; ", - Arrays.asList(getName()), System.currentTimeMillis(), timeoutDate, objectState); + Arrays.asList(getName(value)), System.currentTimeMillis(), timeoutDate, objectState); } @Override @@ -213,7 +215,7 @@ public class RedissonSetCache extends RedissonExpirable implements RSetCache< @Override public RFuture removeAsync(Object o) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.ZREM, getName(), o); + return commandExecutor.writeAsync(getName(o), codec, RedisCommands.ZREM, getName(o), o); } @Override diff --git a/redisson/src/main/java/org/redisson/RedissonSetMultimap.java b/redisson/src/main/java/org/redisson/RedissonSetMultimap.java index 30945073f..bb2c40e34 100644 --- a/redisson/src/main/java/org/redisson/RedissonSetMultimap.java +++ b/redisson/src/main/java/org/redisson/RedissonSetMultimap.java @@ -23,6 +23,7 @@ import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Set; +import java.util.UUID; import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; @@ -47,12 +48,12 @@ public class RedissonSetMultimap extends RedissonMultimap implements private static final RedisStrictCommand SCARD_VALUE = new RedisStrictCommand("SCARD", new BooleanAmountReplayConvertor()); private static final RedisCommand SISMEMBER_VALUE = new RedisCommand("SISMEMBER", new BooleanReplayConvertor()); - RedissonSetMultimap(CommandAsyncExecutor connectionManager, String name) { - super(connectionManager, name); + RedissonSetMultimap(UUID id, CommandAsyncExecutor connectionManager, String name) { + super(id, connectionManager, name); } - RedissonSetMultimap(Codec codec, CommandAsyncExecutor connectionManager, String name) { - super(codec, connectionManager, name); + RedissonSetMultimap(UUID id, Codec codec, CommandAsyncExecutor connectionManager, String name) { + super(id, codec, connectionManager, name); } @Override diff --git a/redisson/src/main/java/org/redisson/RedissonSetMultimapCache.java b/redisson/src/main/java/org/redisson/RedissonSetMultimapCache.java index b212c2513..44d436356 100644 --- a/redisson/src/main/java/org/redisson/RedissonSetMultimapCache.java +++ b/redisson/src/main/java/org/redisson/RedissonSetMultimapCache.java @@ -17,6 +17,7 @@ package org.redisson; import java.util.Arrays; import java.util.Collection; +import java.util.UUID; import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; @@ -25,6 +26,7 @@ import org.redisson.api.RSetMultimapCache; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommands; import 
org.redisson.command.CommandAsyncExecutor; +import org.redisson.eviction.EvictionScheduler; /** * @author Nikita Koksharov @@ -36,14 +38,14 @@ public class RedissonSetMultimapCache extends RedissonSetMultimap im private final RedissonMultimapCache baseCache; - RedissonSetMultimapCache(EvictionScheduler evictionScheduler, CommandAsyncExecutor connectionManager, String name) { - super(connectionManager, name); + RedissonSetMultimapCache(UUID id, EvictionScheduler evictionScheduler, CommandAsyncExecutor connectionManager, String name) { + super(id, connectionManager, name); evictionScheduler.scheduleCleanMultimap(name, getTimeoutSetName()); baseCache = new RedissonMultimapCache(connectionManager, name, codec, getTimeoutSetName()); } - RedissonSetMultimapCache(EvictionScheduler evictionScheduler, Codec codec, CommandAsyncExecutor connectionManager, String name) { - super(codec, connectionManager, name); + RedissonSetMultimapCache(UUID id, EvictionScheduler evictionScheduler, Codec codec, CommandAsyncExecutor connectionManager, String name) { + super(id, codec, connectionManager, name); evictionScheduler.scheduleCleanMultimap(name, getTimeoutSetName()); baseCache = new RedissonMultimapCache(connectionManager, name, codec, getTimeoutSetName()); } diff --git a/redisson/src/main/java/org/redisson/RedissonSetMultimapValues.java b/redisson/src/main/java/org/redisson/RedissonSetMultimapValues.java index 3d9ceffe1..239a7d771 100644 --- a/redisson/src/main/java/org/redisson/RedissonSetMultimapValues.java +++ b/redisson/src/main/java/org/redisson/RedissonSetMultimapValues.java @@ -27,7 +27,9 @@ import java.util.concurrent.TimeUnit; import org.redisson.api.RFuture; import org.redisson.api.RSet; +import org.redisson.api.SortOrder; import org.redisson.client.codec.Codec; +import org.redisson.client.codec.ScanCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; import org.redisson.client.protocol.RedisCommands; @@ -38,6 +40,7 @@ import org.redisson.client.protocol.decoder.ListScanResultReplayDecoder; import org.redisson.client.protocol.decoder.NestedMultiDecoder; import org.redisson.client.protocol.decoder.ObjectListReplayDecoder; import org.redisson.client.protocol.decoder.ObjectSetReplayDecoder; +import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandAsyncExecutor; /** @@ -55,6 +58,7 @@ public class RedissonSetMultimapValues extends RedissonExpirable implements R private static final RedisCommand EVAL_CONTAINS_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 6, Arrays.asList(ValueType.MAP_KEY, ValueType.MAP_VALUE)); private static final RedisCommand EVAL_CONTAINS_ALL_WITH_VALUES = new RedisCommand("EVAL", new BooleanReplayConvertor(), 7, ValueType.OBJECTS); + private final RSet set; private final Object key; private final String timeoutSetName; @@ -62,6 +66,7 @@ public class RedissonSetMultimapValues extends RedissonExpirable implements R super(codec, commandExecutor, name); this.timeoutSetName = timeoutSetName; this.key = key; + this.set = new RedissonSet(codec, commandExecutor, name); } @Override @@ -158,8 +163,8 @@ public class RedissonSetMultimapValues extends RedissonExpirable implements R Arrays.asList(timeoutSetName, getName()), System.currentTimeMillis(), key, o); } - private ListScanResult scanIterator(InetSocketAddress client, long startPos) { - RFuture> f = commandExecutor.evalReadAsync(client, getName(), codec, EVAL_SSCAN, + private ListScanResult 
scanIterator(InetSocketAddress client, long startPos) { + RFuture> f = commandExecutor.evalReadAsync(client, getName(), new ScanCodec(codec), EVAL_SSCAN, "local expireDate = 92233720368547758; " + "local expireDateScore = redis.call('zscore', KEYS[1], ARGV[3]); " + "if expireDateScore ~= false then " @@ -179,7 +184,7 @@ public class RedissonSetMultimapValues extends RedissonExpirable implements R return new RedissonBaseIterator() { @Override - ListScanResult iterator(InetSocketAddress client, long nextIterPos) { + ListScanResult iterator(InetSocketAddress client, long nextIterPos) { return scanIterator(client, nextIterPos); } @@ -225,17 +230,17 @@ public class RedissonSetMultimapValues extends RedissonExpirable implements R @Override public boolean add(V e) { - return get(addAsync(e)); + return set.add(e); } @Override public RFuture addAsync(V e) { - return commandExecutor.writeAsync(getName(), codec, RedisCommands.SADD_SINGLE, getName(), e); + return set.addAsync(e); } @Override public V removeRandom() { - return get(removeRandomAsync()); + return set.removeRandom(); } @Override @@ -243,6 +248,16 @@ public class RedissonSetMultimapValues extends RedissonExpirable implements R return commandExecutor.writeAsync(getName(), codec, RedisCommands.SPOP_SINGLE, getName()); } + @Override + public Set removeRandom(int amount) { + return get(removeRandomAsync(amount)); + } + + @Override + public RFuture> removeRandomAsync(int amount) { + return commandExecutor.writeAsync(getName(), codec, RedisCommands.SPOP, getName(), amount); + } + @Override public V random() { return get(randomAsync()); @@ -505,4 +520,106 @@ public class RedissonSetMultimapValues extends RedissonExpirable implements R return commandExecutor.writeAsync(getName(), codec, RedisCommands.SINTER, args.toArray()); } + public RFuture> readSortAsync(SortOrder order) { + return set.readSortAsync(order); + } + + public Set readSort(SortOrder order) { + return set.readSort(order); + } + + public RFuture> readSortAsync(SortOrder order, int offset, int count) { + return set.readSortAsync(order, offset, count); + } + + public Set readSort(SortOrder order, int offset, int count) { + return set.readSort(order, offset, count); + } + + public Set readSort(String byPattern, SortOrder order) { + return set.readSort(byPattern, order); + } + + public RFuture> readSortAsync(String byPattern, SortOrder order) { + return set.readSortAsync(byPattern, order); + } + + public Set readSort(String byPattern, SortOrder order, int offset, int count) { + return set.readSort(byPattern, order, offset, count); + } + + public RFuture> readSortAsync(String byPattern, SortOrder order, int offset, int count) { + return set.readSortAsync(byPattern, order, offset, count); + } + + public Collection readSort(String byPattern, List getPatterns, SortOrder order) { + return set.readSort(byPattern, getPatterns, order); + } + + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order) { + return set.readSortAsync(byPattern, getPatterns, order); + } + + public Collection readSort(String byPattern, List getPatterns, SortOrder order, int offset, + int count) { + return set.readSort(byPattern, getPatterns, order, offset, count); + } + + public RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order, + int offset, int count) { + return set.readSortAsync(byPattern, getPatterns, order, offset, count); + } + + public int sortTo(String destName, SortOrder order) { + return set.sortTo(destName, order); + } + + public RFuture sortToAsync(String 
destName, SortOrder order) { + return set.sortToAsync(destName, order); + } + + public int sortTo(String destName, SortOrder order, int offset, int count) { + return set.sortTo(destName, order, offset, count); + } + + public RFuture sortToAsync(String destName, SortOrder order, int offset, int count) { + return set.sortToAsync(destName, order, offset, count); + } + + public int sortTo(String destName, String byPattern, SortOrder order) { + return set.sortTo(destName, byPattern, order); + } + + public RFuture sortToAsync(String destName, String byPattern, SortOrder order) { + return set.sortToAsync(destName, byPattern, order); + } + + public int sortTo(String destName, String byPattern, SortOrder order, int offset, int count) { + return set.sortTo(destName, byPattern, order, offset, count); + } + + public RFuture sortToAsync(String destName, String byPattern, SortOrder order, int offset, int count) { + return set.sortToAsync(destName, byPattern, order, offset, count); + } + + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order) { + return set.sortTo(destName, byPattern, getPatterns, order); + } + + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order) { + return set.sortToAsync(destName, byPattern, getPatterns, order); + } + + public int sortTo(String destName, String byPattern, List getPatterns, SortOrder order, int offset, + int count) { + return set.sortTo(destName, byPattern, getPatterns, order, offset, count); + } + + public RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order, + int offset, int count) { + return set.sortToAsync(destName, byPattern, getPatterns, order, offset, count); + } + + + } diff --git a/redisson/src/main/java/org/redisson/RedissonSortedSet.java b/redisson/src/main/java/org/redisson/RedissonSortedSet.java index ebd808ed1..5184a2789 100644 --- a/redisson/src/main/java/org/redisson/RedissonSortedSet.java +++ b/redisson/src/main/java/org/redisson/RedissonSortedSet.java @@ -16,7 +16,6 @@ package org.redisson; import java.io.ByteArrayOutputStream; -import java.io.IOException; import java.io.ObjectOutputStream; import java.io.Serializable; import java.math.BigInteger; @@ -26,6 +25,7 @@ import java.util.Collection; import java.util.Comparator; import java.util.Iterator; import java.util.NoSuchElementException; +import java.util.Set; import java.util.SortedSet; import org.redisson.api.RBucket; @@ -38,13 +38,11 @@ import org.redisson.client.protocol.RedisCommands; import org.redisson.command.CommandExecutor; import org.redisson.misc.RPromise; -import io.netty.channel.EventLoopGroup; - /** * * @author Nikita Koksharov * - * @param value + * @param value type */ public class RedissonSortedSet extends RedissonObject implements RSortedSet { @@ -162,6 +160,16 @@ public class RedissonSortedSet extends RedissonObject implements RSortedSet readAll() { + return get(readAllAsync()); + } + + @Override + public RFuture> readAllAsync() { + return commandExecutor.readAsync(getName(), codec, RedisCommands.LRANGE_SET, getName(), 0, -1); + } + @Override public int size() { return list.size(); @@ -203,12 +211,7 @@ public class RedissonSortedSet extends RedissonObject implements RSortedSet extends RedissonObject implements RSortedSet addAsync(final V value) { final RPromise promise = newPromise(); - commandExecutor.getConnectionManager().getGroup().execute(new Runnable() { + commandExecutor.getConnectionManager().getExecutor().execute(new Runnable() { public void run() { try { 
boolean res = add(value); @@ -255,10 +258,8 @@ public class RedissonSortedSet extends RedissonObject implements RSortedSet removeAsync(final V value) { - EventLoopGroup group = commandExecutor.getConnectionManager().getGroup(); final RPromise promise = newPromise(); - - group.execute(new Runnable() { + commandExecutor.getConnectionManager().getExecutor().execute(new Runnable() { @Override public void run() { try { @@ -316,7 +317,7 @@ public class RedissonSortedSet extends RedissonObject implements RSortedSet c) { boolean changed = false; - for (Iterator iterator = iterator(); iterator.hasNext();) { + for (Iterator iterator = iterator(); iterator.hasNext();) { Object object = (Object) iterator.next(); if (!c.contains(object)) { iterator.remove(); @@ -404,6 +405,7 @@ public class RedissonSortedSet extends RedissonObject implements RSortedSet binarySearch(V value, Codec codec) { int size = list.size(); int upperIndex = size - 1; diff --git a/redisson/src/main/java/org/redisson/RedissonSubList.java b/redisson/src/main/java/org/redisson/RedissonSubList.java index 27ac6b8d2..2246e1575 100644 --- a/redisson/src/main/java/org/redisson/RedissonSubList.java +++ b/redisson/src/main/java/org/redisson/RedissonSubList.java @@ -470,7 +470,7 @@ public class RedissonSubList extends RedissonList implements RList { } @Override - public RFuture trimAsync(long fromIndex, long toIndex) { + public RFuture trimAsync(int fromIndex, int toIndex) { if (fromIndex < this.fromIndex || toIndex >= this.toIndex.get()) { throw new IndexOutOfBoundsException("fromIndex: " + fromIndex + " toIndex: " + toIndex); } diff --git a/redisson/src/main/java/org/redisson/RedissonTopic.java b/redisson/src/main/java/org/redisson/RedissonTopic.java index 5bd324cf4..b5c0242c0 100644 --- a/redisson/src/main/java/org/redisson/RedissonTopic.java +++ b/redisson/src/main/java/org/redisson/RedissonTopic.java @@ -79,10 +79,49 @@ public class RedissonTopic implements RTopic { private int addListener(RedisPubSubListener pubSubListener) { RFuture future = commandExecutor.getConnectionManager().subscribe(codec, name, pubSubListener); - future.syncUninterruptibly(); + commandExecutor.syncSubscription(future); return System.identityHashCode(pubSubListener); } + @Override + public void removeAllListeners() { + AsyncSemaphore semaphore = commandExecutor.getConnectionManager().getSemaphore(name); + semaphore.acquireUninterruptibly(); + + PubSubConnectionEntry entry = commandExecutor.getConnectionManager().getPubSubEntry(name); + if (entry == null) { + semaphore.release(); + return; + } + + entry.removeAllListeners(name); + if (!entry.hasListeners(name)) { + commandExecutor.getConnectionManager().unsubscribe(name, semaphore); + } else { + semaphore.release(); + } + } + + @Override + public void removeListener(MessageListener listener) { + AsyncSemaphore semaphore = commandExecutor.getConnectionManager().getSemaphore(name); + semaphore.acquireUninterruptibly(); + + PubSubConnectionEntry entry = commandExecutor.getConnectionManager().getPubSubEntry(name); + if (entry == null) { + semaphore.release(); + return; + } + + entry.removeListener(name, listener); + if (!entry.hasListeners(name)) { + commandExecutor.getConnectionManager().unsubscribe(name, semaphore); + } else { + semaphore.release(); + } + + } + @Override public void removeListener(int listenerId) { AsyncSemaphore semaphore = commandExecutor.getConnectionManager().getSemaphore(name); diff --git a/redisson/src/main/java/org/redisson/RedissonWriteLock.java 
b/redisson/src/main/java/org/redisson/RedissonWriteLock.java index adf427ac6..8b64ef31e 100644 --- a/redisson/src/main/java/org/redisson/RedissonWriteLock.java +++ b/redisson/src/main/java/org/redisson/RedissonWriteLock.java @@ -52,6 +52,11 @@ public class RedissonWriteLock extends RedissonLock implements RLock { return "redisson_rwlock__{" + getName() + "}"; } + @Override + String getLockName(long threadId) { + return super.getLockName(threadId) + ":write"; + } + @Override RFuture tryLockInnerAsync(long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand command) { internalLockLeaseTime = unit.toMillis(leaseTime); @@ -96,7 +101,10 @@ public class RedissonWriteLock extends RedissonLock implements RLock { "redis.call('hdel', KEYS[1], ARGV[3]); " + "if (redis.call('hlen', KEYS[1]) == 1) then " + "redis.call('del', KEYS[1]); " + - "redis.call('publish', KEYS[2], ARGV[1]); " + + "redis.call('publish', KEYS[2], ARGV[1]); " + + "else " + + // has unlocked read-locks + "redis.call('hset', KEYS[1], 'mode', 'read'); " + "end; " + "return 1; "+ "end; " + @@ -148,18 +156,4 @@ public class RedissonWriteLock extends RedissonLock implements RLock { return "write".equals(res); } - @Override - public boolean isHeldByCurrentThread() { - return commandExecutor.write(getName(), LongCodec.INSTANCE, RedisCommands.HEXISTS, getName(), getLockName(Thread.currentThread().getId())); - } - - @Override - public int getHoldCount() { - Long res = commandExecutor.write(getName(), LongCodec.INSTANCE, RedisCommands.HGET, getName(), getLockName(Thread.currentThread().getId())); - if (res == null) { - return 0; - } - return res.intValue(); - } - } diff --git a/redisson/src/main/java/org/redisson/misc/URIBuilder.java b/redisson/src/main/java/org/redisson/ScanIterator.java similarity index 62% rename from redisson/src/main/java/org/redisson/misc/URIBuilder.java rename to redisson/src/main/java/org/redisson/ScanIterator.java index aa956790d..5f06e8d45 100644 --- a/redisson/src/main/java/org/redisson/misc/URIBuilder.java +++ b/redisson/src/main/java/org/redisson/ScanIterator.java @@ -13,20 +13,17 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -package org.redisson.misc; +package org.redisson; -import java.net.URI; +import java.net.InetSocketAddress; -public class URIBuilder { +import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; - public static URI create(String uri) { - String[] parts = uri.split(":"); - if (parts.length-1 >= 3) { - String port = parts[parts.length-1]; - uri = "[" + uri.replace(":" + port, "") + "]:" + port; - } +public interface ScanIterator { - return URI.create("//" + uri); - } + ListScanResult scanIterator(String name, InetSocketAddress client, long startPos); + + boolean remove(Object value); } diff --git a/redisson/src/main/java/org/redisson/api/ClusterNode.java b/redisson/src/main/java/org/redisson/api/ClusterNode.java index 259b612a1..b08cac43b 100644 --- a/redisson/src/main/java/org/redisson/api/ClusterNode.java +++ b/redisson/src/main/java/org/redisson/api/ClusterNode.java @@ -25,12 +25,15 @@ import java.util.Map; */ public interface ClusterNode extends Node { + // Use {@link #clusterInfo()} + @Deprecated + Map info(); + /** * Execute CLUSTER INFO operation. 
* - * @return Map extracted via each response line splitting - * by ':' symbol + * @return value mapped by field */ - Map info(); - + Map clusterInfo(); + } diff --git a/redisson/src/main/java/org/redisson/CronSchedule.java b/redisson/src/main/java/org/redisson/api/CronSchedule.java similarity index 97% rename from redisson/src/main/java/org/redisson/CronSchedule.java rename to redisson/src/main/java/org/redisson/api/CronSchedule.java index 448e23e7f..7bb71b954 100644 --- a/redisson/src/main/java/org/redisson/CronSchedule.java +++ b/redisson/src/main/java/org/redisson/api/CronSchedule.java @@ -13,9 +13,8 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -package org.redisson; +package org.redisson.api; -import org.redisson.api.RScheduledExecutorService; import org.redisson.executor.CronExpression; /** @@ -27,7 +26,7 @@ import org.redisson.executor.CronExpression; * @author Nikita Koksharov * */ -public class CronSchedule { +public final class CronSchedule { private CronExpression expression; diff --git a/redisson/src/main/java/org/redisson/api/LocalCachedMapOptions.java b/redisson/src/main/java/org/redisson/api/LocalCachedMapOptions.java index 8e3f72e26..89a600aa5 100644 --- a/redisson/src/main/java/org/redisson/api/LocalCachedMapOptions.java +++ b/redisson/src/main/java/org/redisson/api/LocalCachedMapOptions.java @@ -86,7 +86,7 @@ public class LocalCachedMapOptions { } /** - * Sets cache size. If size is 0 then cache is unbounded. + * Sets cache size. If size is 0 then local cache is unbounded. * * @param cacheSize - size of cache * @return LocalCachedMapOptions instance diff --git a/redisson/src/main/java/org/redisson/api/Node.java b/redisson/src/main/java/org/redisson/api/Node.java index 58b32d836..be2831b89 100644 --- a/redisson/src/main/java/org/redisson/api/Node.java +++ b/redisson/src/main/java/org/redisson/api/Node.java @@ -16,6 +16,7 @@ package org.redisson.api; import java.net.InetSocketAddress; +import java.util.Map; /** * Redis node interface @@ -23,8 +24,12 @@ import java.net.InetSocketAddress; * @author Nikita Koksharov * */ -public interface Node { +public interface Node extends NodeAsync { + enum InfoSection {ALL, DEFAULT, SERVER, CLIENTS, MEMORY, PERSISTENCE, STATS, REPLICATION, CPU, COMMANDSTATS, CLUSTER, KEYSPACE} + + Map info(InfoSection section); + /** * Returns current Redis server time in seconds * @@ -52,5 +57,5 @@ public interface Node { * @return true if PONG received, false otherwise */ boolean ping(); - + } diff --git a/redisson/src/main/java/org/redisson/api/NodeAsync.java b/redisson/src/main/java/org/redisson/api/NodeAsync.java new file mode 100644 index 000000000..af057c334 --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/NodeAsync.java @@ -0,0 +1,38 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.api; + +import java.util.Map; + +import org.redisson.api.Node.InfoSection; + +/** + * Redis node interface + * + * @author Nikita Koksharov + * + */ +public interface NodeAsync { + + RFuture> infoAsync(InfoSection section); + + RFuture timeAsync(); + + RFuture pingAsync(); + + RFuture> clusterInfoAsync(); + +} diff --git a/redisson/src/main/java/org/redisson/api/NodesGroup.java b/redisson/src/main/java/org/redisson/api/NodesGroup.java index 4b9c0cf98..ff77e7b75 100644 --- a/redisson/src/main/java/org/redisson/api/NodesGroup.java +++ b/redisson/src/main/java/org/redisson/api/NodesGroup.java @@ -43,7 +43,15 @@ public interface NodesGroup { void removeConnectionListener(int listenerId); /** - * Get all nodes by type + * Get Redis node by address in format: host:port + * + * @param address of node + * @return node + */ + N getNode(String address); + + /** + * Get all Redis nodes by type * * @param type - type of node * @return collection of nodes diff --git a/redisson/src/main/java/org/redisson/api/RBatch.java b/redisson/src/main/java/org/redisson/api/RBatch.java index 199e6a00c..802b3ad4e 100644 --- a/redisson/src/main/java/org/redisson/api/RBatch.java +++ b/redisson/src/main/java/org/redisson/api/RBatch.java @@ -403,7 +403,9 @@ public interface RBatch { * Command replies are skipped such approach saves response bandwidth. *

* If cluster configuration used then operations are grouped by slot ids - * and may be executed on different servers. Thus command execution order could be changed + * and may be executed on different servers. Thus command execution order could be changed. + *

+ * NOTE: Redis 3.2+ required * * @throws RedisException in case of any error * @@ -416,6 +418,8 @@ public interface RBatch { *

* If cluster configuration used then operations are grouped by slot ids * and may be executed on different servers. Thus command execution order could be changed + *

+ * NOTE: Redis 3.2+ required * * @return void * @throws RedisException in case of any error diff --git a/redisson/src/main/java/org/redisson/api/RBinaryStream.java b/redisson/src/main/java/org/redisson/api/RBinaryStream.java new file mode 100644 index 000000000..f9ff67e69 --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/RBinaryStream.java @@ -0,0 +1,45 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.api; + +import java.io.InputStream; +import java.io.OutputStream; + +/** + * Binary stream holder. Maximum size of stream is limited by available memory of Redis master node. + * + * @author Nikita Koksharov + * + */ +public interface RBinaryStream extends RBucket { + + /** + * Returns inputStream which reads binary stream. + * This stream isn't thread-safe. + * + * @return stream + */ + InputStream getInputStream(); + + /** + * Returns outputStream which writes binary stream. + * This stream isn't thread-safe. + * + * @return stream + */ + OutputStream getOutputStream(); + +} diff --git a/redisson/src/main/java/org/redisson/api/RBlockingFairQueue.java b/redisson/src/main/java/org/redisson/api/RBlockingFairQueue.java new file mode 100644 index 000000000..f61fa3a38 --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/RBlockingFairQueue.java @@ -0,0 +1,28 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.api; + +/** + * Blocking queue with fair polling and + * guarantees access order for poll and take methods. + * + * @author Nikita Koksharov + * + * @param value + */ +public interface RBlockingFairQueue extends RBlockingQueue, RDestroyable { + +} diff --git a/redisson/src/main/java/org/redisson/api/RBloomFilter.java b/redisson/src/main/java/org/redisson/api/RBloomFilter.java index b1bd46c5c..57295e3e1 100644 --- a/redisson/src/main/java/org/redisson/api/RBloomFilter.java +++ b/redisson/src/main/java/org/redisson/api/RBloomFilter.java @@ -18,8 +18,6 @@ package org.redisson.api; /** * Bloom filter based on 64-bit hash derived from 128-bit hash (xxHash + FarmHash). 
* - * Code parts from Guava BloomFilter - * * @author Nikita Koksharov * * @param - type of object diff --git a/redisson/src/main/java/org/redisson/api/RBucket.java b/redisson/src/main/java/org/redisson/api/RBucket.java index 8384082ee..f05f42f12 100644 --- a/redisson/src/main/java/org/redisson/api/RBucket.java +++ b/redisson/src/main/java/org/redisson/api/RBucket.java @@ -18,7 +18,7 @@ package org.redisson.api; import java.util.concurrent.TimeUnit; /** - * Any object holder + * Any object holder. Max size of object is 512MB * * @author Nikita Koksharov * @@ -31,7 +31,7 @@ public interface RBucket extends RExpirable, RBucketAsync { * * @return object size */ - int size(); + long size(); V get(); diff --git a/redisson/src/main/java/org/redisson/api/RBucketAsync.java b/redisson/src/main/java/org/redisson/api/RBucketAsync.java index 23508a2d7..d3658fa2c 100644 --- a/redisson/src/main/java/org/redisson/api/RBucketAsync.java +++ b/redisson/src/main/java/org/redisson/api/RBucketAsync.java @@ -31,7 +31,7 @@ public interface RBucketAsync extends RExpirableAsync { * * @return object size */ - RFuture sizeAsync(); + RFuture sizeAsync(); RFuture getAsync(); diff --git a/redisson/src/main/java/org/redisson/api/RDelayedQueue.java b/redisson/src/main/java/org/redisson/api/RDelayedQueue.java new file mode 100644 index 000000000..3340987c2 --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/RDelayedQueue.java @@ -0,0 +1,49 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.api; + +import java.util.concurrent.TimeUnit; + +/** + * + * @author Nikita Koksharov + * + * @param value type + */ +public interface RDelayedQueue extends RQueue, RDestroyable { + + /** + * Inserts element into this queue with + * specified transfer delay to destination queue. + * + * @param e the element to add + * @param delay for transition + * @param timeUnit for delay + */ + void offer(V e, long delay, TimeUnit timeUnit); + + /** + * Inserts element into this queue with + * specified transfer delay to destination queue. + * + * @param e the element to add + * @param delay for transition + * @param timeUnit for delay + * @return void + */ + RFuture offerAsync(V e, long delay, TimeUnit timeUnit); + +} diff --git a/redisson/src/main/java/org/redisson/api/RDestroyable.java b/redisson/src/main/java/org/redisson/api/RDestroyable.java index 3d9615a21..5824da0e1 100644 --- a/redisson/src/main/java/org/redisson/api/RDestroyable.java +++ b/redisson/src/main/java/org/redisson/api/RDestroyable.java @@ -23,7 +23,7 @@ package org.redisson.api; public interface RDestroyable { /** - * Allows to destroy object then it's not necessary anymore. + * Destroys object when it's not necessary anymore. 
*/ void destroy(); diff --git a/redisson/src/main/java/org/redisson/api/RKeys.java b/redisson/src/main/java/org/redisson/api/RKeys.java index 627d51001..73ca62ffa 100644 --- a/redisson/src/main/java/org/redisson/api/RKeys.java +++ b/redisson/src/main/java/org/redisson/api/RKeys.java @@ -17,8 +17,21 @@ package org.redisson.api; import java.util.Collection; +/** + * + * @author Nikita Koksharov + * + */ public interface RKeys extends RKeysAsync { + /** + * Checks if provided keys exist + * + * @param names of keys + * @return amount of existing keys + */ + Long isExists(String... names); + /** * Get Redis object type by key * diff --git a/redisson/src/main/java/org/redisson/api/RKeysAsync.java b/redisson/src/main/java/org/redisson/api/RKeysAsync.java index 252203a2e..70c9772a6 100644 --- a/redisson/src/main/java/org/redisson/api/RKeysAsync.java +++ b/redisson/src/main/java/org/redisson/api/RKeysAsync.java @@ -17,8 +17,21 @@ package org.redisson.api; import java.util.Collection; +/** + * + * @author Nikita Koksharov + * + */ public interface RKeysAsync { + /** + * Checks if provided keys exist + * + * @param names of keys + * @return amount of existing keys + */ + RFuture isExistsAsync(String... names); + /** * Get Redis object type by key * diff --git a/redisson/src/main/java/org/redisson/api/RList.java b/redisson/src/main/java/org/redisson/api/RList.java index 7aea672e0..b6dc6a05a 100644 --- a/redisson/src/main/java/org/redisson/api/RList.java +++ b/redisson/src/main/java/org/redisson/api/RList.java @@ -25,7 +25,7 @@ import java.util.RandomAccess; * * @param the type of elements held in this collection */ -public interface RList extends List, RExpirable, RListAsync, RandomAccess { +public interface RList extends List, RExpirable, RListAsync, RSortable>, RandomAccess { /** * Add element after elementToFind @@ -34,7 +34,7 @@ public interface RList extends List, RExpirable, RListAsync, RandomAcce * @param element - object to add * @return new list size */ - Integer addAfter(V elementToFind, V element); + int addAfter(V elementToFind, V element); /** * Add element before elementToFind @@ -43,7 +43,7 @@ public interface RList extends List, RExpirable, RListAsync, RandomAcce * @param element - object to add * @return new list size */ - Integer addBefore(V elementToFind, V element); + int addBefore(V elementToFind, V element); /** * Set element at index. 
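The `RDelayedQueue` interface introduced above holds elements and transfers them to a destination queue only after the supplied delay has elapsed, and it extends `RDestroyable` so the proxy can be released when no longer needed. A minimal usage sketch, assuming a running `RedissonClient` instance named `redisson` and the `getDelayedQueue` factory method added to `RedissonClient` further down in this diff (queue names are illustrative):

```java
import java.util.concurrent.TimeUnit;

import org.redisson.api.RDelayedQueue;
import org.redisson.api.RQueue;
import org.redisson.api.RedissonClient;

public class DelayedQueueExample {

    public static void demo(RedissonClient redisson) {
        // plain queue that consumers poll as usual
        RQueue<String> destinationQueue = redisson.getQueue("orders");

        // delayed queue attached to the destination queue
        RDelayedQueue<String> delayedQueue = redisson.getDelayedQueue(destinationQueue);

        // the element appears in "orders" only after 10 seconds
        delayedQueue.offer("order-123", 10, TimeUnit.SECONDS);

        // destroys the delayed queue object when it is not necessary anymore
        delayedQueue.destroy();
    }
}
```

Consumers keep polling the plain destination queue and never see an element before its delay expires.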
diff --git a/redisson/src/main/java/org/redisson/api/RListAsync.java b/redisson/src/main/java/org/redisson/api/RListAsync.java index eb0d7d2de..48d10c2fb 100644 --- a/redisson/src/main/java/org/redisson/api/RListAsync.java +++ b/redisson/src/main/java/org/redisson/api/RListAsync.java @@ -26,7 +26,7 @@ import java.util.RandomAccess; * * @param the type of elements held in this collection */ -public interface RListAsync extends RCollectionAsync, RandomAccess { +public interface RListAsync extends RCollectionAsync, RSortableAsync>, RandomAccess { /** * Add element after elementToFind @@ -82,7 +82,7 @@ public interface RListAsync extends RCollectionAsync, RandomAccess { * @param toIndex - to index * @return void */ - RFuture trimAsync(long fromIndex, long toIndex); + RFuture trimAsync(int fromIndex, int toIndex); RFuture fastRemoveAsync(long index); diff --git a/redisson/src/main/java/org/redisson/api/RLiveObjectService.java b/redisson/src/main/java/org/redisson/api/RLiveObjectService.java index de0078c3b..db91ffb7c 100644 --- a/redisson/src/main/java/org/redisson/api/RLiveObjectService.java +++ b/redisson/src/main/java/org/redisson/api/RLiveObjectService.java @@ -26,16 +26,6 @@ package org.redisson.api; */ public interface RLiveObjectService { - /** - * Use {@link #persist(Object)} method instead - * - * @param entityClass Entity class - * @param Entity type - * @return Always returns a proxied object. Even it does not exist in redis. - */ - @Deprecated - T create(Class entityClass); - /** * Finds the entity from Redis with the id. * @@ -57,18 +47,6 @@ public interface RLiveObjectService { */ T get(Class entityClass, K id); - /** - * Use {@link #persist(Object)} method instead - * - * @param entityClass Entity class - * @param id identifier - * @param Entity type - * @param Key type - * @return Always returns a proxied object. Even it does not exist in redis. - */ - @Deprecated - T getOrCreate(Class entityClass, K id); - /** * Returns proxied object for the detached object. Discard all the * field values already in the detached instance. diff --git a/redisson/src/main/java/org/redisson/api/RLock.java b/redisson/src/main/java/org/redisson/api/RLock.java index 09395d9b4..c96c66025 100644 --- a/redisson/src/main/java/org/redisson/api/RLock.java +++ b/redisson/src/main/java/org/redisson/api/RLock.java @@ -27,7 +27,7 @@ import java.util.concurrent.locks.Lock; * */ -public interface RLock extends Lock, RExpirable { +public interface RLock extends Lock, RExpirable, RLockAsync { /** * Acquires the lock. @@ -112,18 +112,4 @@ public interface RLock extends Lock, RExpirable { */ int getHoldCount(); - RFuture forceUnlockAsync(); - - RFuture unlockAsync(); - - RFuture tryLockAsync(); - - RFuture lockAsync(); - - RFuture lockAsync(long leaseTime, TimeUnit unit); - - RFuture tryLockAsync(long waitTime, TimeUnit unit); - - RFuture tryLockAsync(long waitTime, long leaseTime, TimeUnit unit); - } diff --git a/redisson/src/main/java/org/redisson/api/RLockAsync.java b/redisson/src/main/java/org/redisson/api/RLockAsync.java new file mode 100644 index 000000000..329c2de65 --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/RLockAsync.java @@ -0,0 +1,43 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.api; + +import java.util.concurrent.TimeUnit; + +/** + * Distributed implementation of {@link java.util.concurrent.locks.Lock} + * + * @author Nikita Koksharov + * + */ + +public interface RLockAsync extends RExpirableAsync { + + RFuture forceUnlockAsync(); + + RFuture unlockAsync(); + + RFuture tryLockAsync(); + + RFuture lockAsync(); + + RFuture lockAsync(long leaseTime, TimeUnit unit); + + RFuture tryLockAsync(long waitTime, TimeUnit unit); + + RFuture tryLockAsync(long waitTime, long leaseTime, TimeUnit unit); + +} diff --git a/redisson/src/main/java/org/redisson/api/RMap.java b/redisson/src/main/java/org/redisson/api/RMap.java index 415985715..cae8c6738 100644 --- a/redisson/src/main/java/org/redisson/api/RMap.java +++ b/redisson/src/main/java/org/redisson/api/RMap.java @@ -33,6 +33,14 @@ import java.util.concurrent.ConcurrentMap; */ public interface RMap extends ConcurrentMap, RExpirable, RMapAsync { + /** + * Returns RLock instance associated with key + * + * @param key - map key + * @return lock + */ + RLock getLock(K key); + /** * Returns size of value mapped by key in bytes * diff --git a/redisson/src/main/java/org/redisson/api/RMapCache.java b/redisson/src/main/java/org/redisson/api/RMapCache.java index 5df58bf73..5a181d9c4 100644 --- a/redisson/src/main/java/org/redisson/api/RMapCache.java +++ b/redisson/src/main/java/org/redisson/api/RMapCache.java @@ -24,12 +24,11 @@ import java.util.concurrent.TimeUnit; * *

Current redis implementation doesnt have map entry eviction functionality. * Thus entries are checked for TTL expiration during any key/value/entry read operation. - If key/value/entry expired then it doesn't returns and clean task runs asynchronous. - Clean task deletes removes 100 expired entries at once. - In addition there is {@link org.redisson.EvictionScheduler}. This scheduler + If key/value/entry expired then it isn't returned. + Expired entries are cleaned by {@link org.redisson.eviction.EvictionScheduler}. This scheduler * deletes expired entries in time interval between 5 seconds to 2 hours.

* - *

If eviction is not required then it's better to use {@link org.redisson.reactive.RedissonMapReactive}.

+ *

If eviction is not required then it's better to use {@link org.redisson.RedissonMap}.

* * @author Nikita Koksharov * @@ -44,16 +43,13 @@ public interface RMapCache extends RMap, RMapCacheAsync { *

* Stores value mapped by key with specified time to live. * Entry expires after specified time to live. - *

- * If the map previously contained a mapping for - * the key, the old value is replaced by the specified value. * * @param key - map key * @param value - map value * @param ttl - time to live for key\value entry. * If 0 then stores infinitely. * @param ttlUnit - time unit - * @return previous associated value + * @return current associated value */ V putIfAbsent(K key, V value, long ttl, TimeUnit ttlUnit); @@ -63,9 +59,6 @@ public interface RMapCache extends RMap, RMapCacheAsync { *

* Stores value mapped by key with specified time to live and max idle time. * Entry expires when specified time to live or max idle time has expired. - *

- * If the map previously contained a mapping for - * the key, the old value is replaced by the specified value. * * @param key - map key * @param value - map value @@ -79,7 +72,7 @@ public interface RMapCache extends RMap, RMapCacheAsync { * if maxIdleTime and ttl params are equal to 0 * then entry stores infinitely. * - * @return previous associated value + * @return current associated value */ V putIfAbsent(K key, V value, long ttl, TimeUnit ttlUnit, long maxIdleTime, TimeUnit maxIdleUnit); @@ -137,7 +130,8 @@ public interface RMapCache extends RMap, RMapCacheAsync { * @param ttl - time to live for key\value entry. * If 0 then stores infinitely. * @param ttlUnit - time unit - * @return true if value has been set successfully + * @return true if key is a new key in the hash and value was set. + * false if key already exists in the hash and the value was updated. */ boolean fastPut(K key, V value, long ttl, TimeUnit ttlUnit); @@ -163,7 +157,8 @@ public interface RMapCache extends RMap, RMapCacheAsync { * if maxIdleTime and ttl params are equal to 0 * then entry stores infinitely. - * @return previous associated value + * @return true if key is a new key in the hash and value was set. + * false if key already exists in the hash and the value was updated. */ boolean fastPut(K key, V value, long ttl, TimeUnit ttlUnit, long maxIdleTime, TimeUnit maxIdleUnit); diff --git a/redisson/src/main/java/org/redisson/api/RMapCacheAsync.java b/redisson/src/main/java/org/redisson/api/RMapCacheAsync.java index 15a067cb8..979590ebb 100644 --- a/redisson/src/main/java/org/redisson/api/RMapCacheAsync.java +++ b/redisson/src/main/java/org/redisson/api/RMapCacheAsync.java @@ -18,18 +18,17 @@ package org.redisson.api; import java.util.concurrent.TimeUnit; /** - *

Async interface for map-based cache with ability to set TTL for each entry via - * {RMapCacheAsync#putAsync(K, V, long, TimeUnit)} or {RMapCacheAsync#putIfAbsentAsync(K, V, long, TimeUnit)} + *

Map-based cache with ability to set TTL for each entry via + {@link RMapCache#put(Object, Object, long, TimeUnit)} or {@link RMapCache#putIfAbsent(Object, Object, long, TimeUnit)} * And therefore has complex Lua scripts inside.

* - *

Current redis implementation doesnt have eviction functionality. + *

Current redis implementation doesnt have map entry eviction functionality. * Thus entries are checked for TTL expiration during any key/value/entry read operation. - If key/value/entry expired then it doesn't returns and clean task runs asynchronous. - Clean task deletes removes 100 expired entries at once. - In addition there is {@link org.redisson.EvictionScheduler}. This scheduler + If key/value/entry expired then it isn't returned. + Expired entries are cleaned by {@link org.redisson.eviction.EvictionScheduler}. This scheduler * deletes expired entries in time interval between 5 seconds to 2 hours.

* - *

If eviction is not required then it's better to use {@link org.redisson.reactive.RedissonMapReactive}.

+ *

If eviction is not required then it's better to use {@link org.redisson.RedissonMap}.

* * @author Nikita Koksharov * diff --git a/redisson/src/main/java/org/redisson/api/RMapCacheReactive.java b/redisson/src/main/java/org/redisson/api/RMapCacheReactive.java index 751e05872..790b4822d 100644 --- a/redisson/src/main/java/org/redisson/api/RMapCacheReactive.java +++ b/redisson/src/main/java/org/redisson/api/RMapCacheReactive.java @@ -28,7 +28,7 @@ import org.reactivestreams.Publisher; * Thus entries are checked for TTL expiration during any key/value/entry read operation. * If key/value/entry expired then it doesn't returns and clean task runs asynchronous. * Clean task deletes removes 100 expired entries at once. - * In addition there is {@link org.redisson.EvictionScheduler}. This scheduler + * In addition there is {@link org.redisson.eviction.EvictionScheduler}. This scheduler * deletes expired entries in time interval between 5 seconds to 2 hours.

* *

If eviction is not required then it's better to use {@link org.redisson.reactive.RedissonMapReactive}.

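The revised `putIfAbsent` and `fastPut` javadocs above describe the TTL and max-idle semantics of `RMapCache`. A short sketch of how they are typically called, assuming a `RedissonClient` instance named `redisson` (the map name and values are illustrative):

```java
import java.util.concurrent.TimeUnit;

import org.redisson.api.RMapCache;
import org.redisson.api.RedissonClient;

public class MapCacheExample {

    public static void demo(RedissonClient redisson) {
        RMapCache<String, String> sessions = redisson.getMapCache("sessions");

        // stores the value only if the key is absent;
        // returns the currently associated value (null when this call stored it)
        String current = sessions.putIfAbsent("user:1", "token-a", 10, TimeUnit.MINUTES);

        // true  - key was new and the value was set
        // false - key already existed and the value was updated
        boolean isNew = sessions.fastPut("user:1", "token-b",
                10, TimeUnit.MINUTES,   // time to live
                5, TimeUnit.MINUTES);   // max idle time

        System.out.println(current + " " + isNew);
    }
}
```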
diff --git a/redisson/src/main/java/org/redisson/api/RMultimap.java b/redisson/src/main/java/org/redisson/api/RMultimap.java index 05f9b94f9..31212d450 100644 --- a/redisson/src/main/java/org/redisson/api/RMultimap.java +++ b/redisson/src/main/java/org/redisson/api/RMultimap.java @@ -29,6 +29,14 @@ import java.util.Set; */ public interface RMultimap extends RExpirable, RMultimapAsync { + /** + * Returns RLock instance associated with key + * + * @param key - map key + * @return lock + */ + RLock getLock(K key); + /** * Returns the number of key-value pairs in this multimap. * diff --git a/redisson/src/main/java/org/redisson/api/RPatternTopic.java b/redisson/src/main/java/org/redisson/api/RPatternTopic.java index eab373961..661b773b0 100644 --- a/redisson/src/main/java/org/redisson/api/RPatternTopic.java +++ b/redisson/src/main/java/org/redisson/api/RPatternTopic.java @@ -62,5 +62,18 @@ public interface RPatternTopic { * @param listenerId - id of message listener */ void removeListener(int listenerId); + + /** + * Removes the listener by its instance + * + * @param listener - listener instance + */ + void removeListener(PatternMessageListener listener); + + /** + * Removes all listeners from this topic + */ + void removeAllListeners(); + } diff --git a/redisson/src/main/java/org/redisson/api/RPriorityDeque.java b/redisson/src/main/java/org/redisson/api/RPriorityDeque.java new file mode 100644 index 000000000..8d29f1eba --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/RPriorityDeque.java @@ -0,0 +1,28 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.api; + +import java.util.Deque; + +/** + * + * @author Nikita Koksharov + * + * @param value type + */ +public interface RPriorityDeque extends Deque, RPriorityQueue { + +} diff --git a/redisson/src/main/java/org/redisson/api/RPriorityQueue.java b/redisson/src/main/java/org/redisson/api/RPriorityQueue.java new file mode 100644 index 000000000..3b8b35894 --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/RPriorityQueue.java @@ -0,0 +1,43 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.api; + +import java.util.Comparator; +import java.util.List; +import java.util.Queue; + +/** + * + * @author Nikita Koksharov + * + * @param value type + */ +public interface RPriorityQueue extends Queue, RObject { + + Comparator comparator(); + + List readAll(); + + /** + * Sets new comparator only if current set is empty + * + * @param comparator for values + * @return true if new comparator setted + * false otherwise + */ + boolean trySetComparator(Comparator comparator); + +} diff --git a/redisson/src/main/java/org/redisson/api/RQueue.java b/redisson/src/main/java/org/redisson/api/RQueue.java index b63f367d4..380038023 100644 --- a/redisson/src/main/java/org/redisson/api/RQueue.java +++ b/redisson/src/main/java/org/redisson/api/RQueue.java @@ -15,6 +15,7 @@ */ package org.redisson.api; +import java.util.List; import java.util.Queue; /** @@ -28,6 +29,9 @@ public interface RQueue extends Queue, RExpirable, RQueueAsync { V pollLastAndOfferFirstTo(String dequeName); + @Deprecated V pollLastAndOfferFirstTo(RQueue deque); + List readAll(); + } diff --git a/redisson/src/main/java/org/redisson/api/RQueueAsync.java b/redisson/src/main/java/org/redisson/api/RQueueAsync.java index 5e4290e32..5b953a77c 100644 --- a/redisson/src/main/java/org/redisson/api/RQueueAsync.java +++ b/redisson/src/main/java/org/redisson/api/RQueueAsync.java @@ -15,6 +15,8 @@ */ package org.redisson.api; +import java.util.List; + /** * {@link java.util.Queue} backed by Redis * @@ -32,4 +34,6 @@ public interface RQueueAsync extends RCollectionAsync { RFuture pollLastAndOfferFirstToAsync(String queueName); + RFuture> readAllAsync(); + } diff --git a/redisson/src/main/java/org/redisson/api/RScheduledExecutorService.java b/redisson/src/main/java/org/redisson/api/RScheduledExecutorService.java index 32dc52da7..35bb5a845 100644 --- a/redisson/src/main/java/org/redisson/api/RScheduledExecutorService.java +++ b/redisson/src/main/java/org/redisson/api/RScheduledExecutorService.java @@ -18,8 +18,6 @@ package org.redisson.api; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ScheduledFuture; -import org.redisson.CronSchedule; - /** * Distributed implementation of {@link java.util.concurrent.ScheduledExecutorService} * diff --git a/redisson/src/main/java/org/redisson/api/RScheduledExecutorServiceAsync.java b/redisson/src/main/java/org/redisson/api/RScheduledExecutorServiceAsync.java index 13de21ea5..3e293c08c 100644 --- a/redisson/src/main/java/org/redisson/api/RScheduledExecutorServiceAsync.java +++ b/redisson/src/main/java/org/redisson/api/RScheduledExecutorServiceAsync.java @@ -18,8 +18,6 @@ package org.redisson.api; import java.util.concurrent.Callable; import java.util.concurrent.TimeUnit; -import org.redisson.CronSchedule; - /** * Distributed implementation of {@link java.util.concurrent.ScheduledExecutorService} * diff --git a/redisson/src/main/java/org/redisson/api/RScoredSortedSet.java b/redisson/src/main/java/org/redisson/api/RScoredSortedSet.java index 978ea14df..c6a2884a4 100644 --- a/redisson/src/main/java/org/redisson/api/RScoredSortedSet.java +++ b/redisson/src/main/java/org/redisson/api/RScoredSortedSet.java @@ -17,6 +17,7 @@ package org.redisson.api; import java.util.Collection; import java.util.Map; +import java.util.Set; import org.redisson.client.protocol.ScoredEntry; @@ -26,7 +27,7 @@ import org.redisson.client.protocol.ScoredEntry; * * @param value */ -public interface RScoredSortedSet extends RScoredSortedSetAsync, Iterable, RExpirable { +public 
interface RScoredSortedSet extends RScoredSortedSetAsync, Iterable, RExpirable, RSortable> { public enum Aggregate { diff --git a/redisson/src/main/java/org/redisson/api/RScoredSortedSetAsync.java b/redisson/src/main/java/org/redisson/api/RScoredSortedSetAsync.java index 6288b9cc4..1f2d761b8 100644 --- a/redisson/src/main/java/org/redisson/api/RScoredSortedSetAsync.java +++ b/redisson/src/main/java/org/redisson/api/RScoredSortedSetAsync.java @@ -17,6 +17,7 @@ package org.redisson.api; import java.util.Collection; import java.util.Map; +import java.util.Set; import org.redisson.api.RScoredSortedSet.Aggregate; import org.redisson.client.protocol.ScoredEntry; @@ -27,7 +28,7 @@ import org.redisson.client.protocol.ScoredEntry; * * @param value */ -public interface RScoredSortedSetAsync extends RExpirableAsync { +public interface RScoredSortedSetAsync extends RExpirableAsync, RSortableAsync> { RFuture pollLastAsync(); diff --git a/redisson/src/main/java/org/redisson/api/RSet.java b/redisson/src/main/java/org/redisson/api/RSet.java index f8e42c01b..2d0e92dc1 100644 --- a/redisson/src/main/java/org/redisson/api/RSet.java +++ b/redisson/src/main/java/org/redisson/api/RSet.java @@ -24,8 +24,16 @@ import java.util.Set; * * @param value */ -public interface RSet extends Set, RExpirable, RSetAsync { +public interface RSet extends Set, RExpirable, RSetAsync, RSortable> { + /** + * Removes and returns random elements from set + * + * @param amount of random values + * @return random values + */ + Set removeRandom(int amount); + /** * Removes and returns random element from set * diff --git a/redisson/src/main/java/org/redisson/api/RSetAsync.java b/redisson/src/main/java/org/redisson/api/RSetAsync.java index 9e8c0fd56..f5662502c 100644 --- a/redisson/src/main/java/org/redisson/api/RSetAsync.java +++ b/redisson/src/main/java/org/redisson/api/RSetAsync.java @@ -24,8 +24,17 @@ import java.util.Set; * * @param value */ -public interface RSetAsync extends RCollectionAsync { +public interface RSetAsync extends RCollectionAsync, RSortableAsync> { + /** + * Removes and returns random elements from set + * in async mode + * + * @param amount of random values + * @return random values + */ + RFuture> removeRandomAsync(int amount); + /** * Removes and returns random element from set * in async mode diff --git a/redisson/src/main/java/org/redisson/api/RSetCache.java b/redisson/src/main/java/org/redisson/api/RSetCache.java index 8efe0addd..7d2451307 100644 --- a/redisson/src/main/java/org/redisson/api/RSetCache.java +++ b/redisson/src/main/java/org/redisson/api/RSetCache.java @@ -26,7 +26,7 @@ import java.util.concurrent.TimeUnit; * Thus values are checked for TTL expiration during any value read operation. * If entry expired then it doesn't returns and clean task runs asynchronous. * Clean task deletes removes 100 expired entries at once. - * In addition there is {@link org.redisson.EvictionScheduler}. This scheduler + * In addition there is {@link org.redisson.eviction.EvictionScheduler}. This scheduler * deletes expired entries in time interval between 5 seconds to 2 hours.

* *

If eviction is not required then it's better to use {@link org.redisson.api.RSet}.

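The new `RSet.removeRandom(int amount)` / `removeRandomAsync(int amount)` methods are backed by the `SPOP` count variant, as in the `RedissonSetMultimapValues` change earlier in this diff. A brief sketch, assuming a `RedissonClient` instance named `redisson` (set name and members are illustrative):

```java
import java.util.Set;

import org.redisson.api.RSet;
import org.redisson.api.RedissonClient;

public class SetRemoveRandomExample {

    public static void demo(RedissonClient redisson) {
        RSet<String> tickets = redisson.getSet("tickets");
        tickets.add("a");
        tickets.add("b");
        tickets.add("c");

        // removes and returns up to 3 random members in a single SPOP call
        Set<String> removed = tickets.removeRandom(3);
        System.out.println(removed);
    }
}
```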
diff --git a/redisson/src/main/java/org/redisson/api/RSortable.java b/redisson/src/main/java/org/redisson/api/RSortable.java new file mode 100644 index 000000000..502d7908c --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/RSortable.java @@ -0,0 +1,157 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.api; + +import java.util.Collection; +import java.util.List; + +/** + * + * @author Nikita Koksharov + * + * @param object type + */ +public interface RSortable extends RSortableAsync { + + /** + * Read data in sorted view + * + * @param order for sorted data + * @return sorted collection + */ + V readSort(SortOrder order); + + /** + * Read data in sorted view + * + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return sorted collection + */ + V readSort(SortOrder order, int offset, int count); + + /** + * Read data in sorted view + * + * @param byPattern that is used to generate the keys that are used for sorting + * @param order for sorted data + * @return sorted collection + */ + V readSort(String byPattern, SortOrder order); + + /** + * Read data in sorted view + * + * @param byPattern that is used to generate the keys that are used for sorting + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return sorted collection + */ + V readSort(String byPattern, SortOrder order, int offset, int count); + + /** + * Read data in sorted view + * + * @param object type + * @param byPattern that is used to generate the keys that are used for sorting + * @param getPatterns that is used to load values by keys in sorted view + * @param order for sorted data + * @return sorted collection + */ + Collection readSort(String byPattern, List getPatterns, SortOrder order); + + /** + * Read data in sorted view + * + * @param object type + * @param byPattern that is used to generate the keys that are used for sorting + * @param getPatterns that is used to load values by keys in sorted view + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return sorted collection + */ + Collection readSort(String byPattern, List getPatterns, SortOrder order, int offset, int count); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param order for sorted data + * @return length of sorted data + */ + int sortTo(String destName, SortOrder order); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return length of sorted data + */ + int sortTo(String destName, SortOrder order, int offset, int count); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param byPattern that is used to generate the keys that are used for sorting + * @param order for 
sorted data + * @return length of sorted data + */ + int sortTo(String destName, String byPattern, SortOrder order); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param byPattern that is used to generate the keys that are used for sorting + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return length of sorted data + */ + int sortTo(String destName, String byPattern, SortOrder order, int offset, int count); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param byPattern that is used to generate the keys that are used for sorting + * @param getPatterns that is used to load values by keys in sorted view + * @param order for sorted data + * @return length of sorted data + */ + int sortTo(String destName, String byPattern, List getPatterns, SortOrder order); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param byPattern that is used to generate the keys that are used for sorting + * @param getPatterns that is used to load values by keys in sorted view + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return length of sorted data + */ + int sortTo(String destName, String byPattern, List getPatterns, SortOrder order, int offset, int count); + +} diff --git a/redisson/src/main/java/org/redisson/api/RSortableAsync.java b/redisson/src/main/java/org/redisson/api/RSortableAsync.java new file mode 100644 index 000000000..9abf26660 --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/RSortableAsync.java @@ -0,0 +1,157 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.api; + +import java.util.Collection; +import java.util.List; + +/** + * + * @author Nikita Koksharov + * + * @param object type + */ +public interface RSortableAsync { + + /** + * Read data in sorted view + * + * @param order for sorted data + * @return sorted collection + */ + RFuture readSortAsync(SortOrder order); + + /** + * Read data in sorted view + * + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return sorted collection + */ + RFuture readSortAsync(SortOrder order, int offset, int count); + + /** + * Read data in sorted view + * + * @param byPattern that is used to generate the keys that are used for sorting + * @param order for sorted data + * @return sorted collection + */ + RFuture readSortAsync(String byPattern, SortOrder order); + + /** + * Read data in sorted view + * + * @param byPattern that is used to generate the keys that are used for sorting + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return sorted collection + */ + RFuture readSortAsync(String byPattern, SortOrder order, int offset, int count); + + /** + * Read data in sorted view + * + * @param object type + * @param byPattern that is used to generate the keys that are used for sorting + * @param getPatterns that is used to load values by keys in sorted view + * @param order for sorted data + * @return sorted collection + */ + RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order); + + /** + * Read data in sorted view + * + * @param object type + * @param byPattern that is used to generate the keys that are used for sorting + * @param getPatterns that is used to load values by keys in sorted view + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return sorted collection + */ + RFuture> readSortAsync(String byPattern, List getPatterns, SortOrder order, int offset, int count); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param order for sorted data + * @return length of sorted data + */ + RFuture sortToAsync(String destName, SortOrder order); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return length of sorted data + */ + RFuture sortToAsync(String destName, SortOrder order, int offset, int count); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param byPattern that is used to generate the keys that are used for sorting + * @param order for sorted data + * @return length of sorted data + */ + RFuture sortToAsync(String destName, String byPattern, SortOrder order); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param byPattern that is used to generate the keys that are used for sorting + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return length of sorted data + */ + RFuture sortToAsync(String destName, String byPattern, SortOrder order, int offset, int count); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param byPattern that is used to generate the keys that are used for sorting + * @param getPatterns that is used to load values by keys in sorted view + * @param order for sorted 
data + * @return length of sorted data + */ + RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order); + + /** + * Sort data and store to destName list + * + * @param destName list object destination + * @param byPattern that is used to generate the keys that are used for sorting + * @param getPatterns that is used to load values by keys in sorted view + * @param order for sorted data + * @param offset of sorted data + * @param count of sorted data + * @return length of sorted data + */ + RFuture sortToAsync(String destName, String byPattern, List getPatterns, SortOrder order, int offset, int count); + +} diff --git a/redisson/src/main/java/org/redisson/api/RSortedSet.java b/redisson/src/main/java/org/redisson/api/RSortedSet.java index dc72886d6..315e010f6 100644 --- a/redisson/src/main/java/org/redisson/api/RSortedSet.java +++ b/redisson/src/main/java/org/redisson/api/RSortedSet.java @@ -16,10 +16,21 @@ package org.redisson.api; import java.util.Comparator; +import java.util.Set; import java.util.SortedSet; +/** + * + * @author Nikita Koksharov + * + * @param value type + */ public interface RSortedSet extends SortedSet, RObject { + Set readAll(); + + RFuture> readAllAsync(); + RFuture addAsync(V value); RFuture removeAsync(V value); diff --git a/redisson/src/main/java/org/redisson/api/RTopic.java b/redisson/src/main/java/org/redisson/api/RTopic.java index 5cc2e2cec..1c722e80d 100644 --- a/redisson/src/main/java/org/redisson/api/RTopic.java +++ b/redisson/src/main/java/org/redisson/api/RTopic.java @@ -64,6 +64,13 @@ public interface RTopic extends RTopicAsync { */ int addListener(StatusListener listener); + /** + * Removes the listener by its instance + * + * @param listener - listener instance + */ + void removeListener(MessageListener listener); + /** * Removes the listener by id for listening this topic * @@ -71,4 +78,9 @@ public interface RTopic extends RTopicAsync { */ void removeListener(int listenerId); + /** + * Removes all listeners from this topic + */ + void removeAllListeners(); + } diff --git a/redisson/src/main/java/org/redisson/api/RedissonClient.java b/redisson/src/main/java/org/redisson/api/RedissonClient.java index 5080ba008..2845c5035 100755 --- a/redisson/src/main/java/org/redisson/api/RedissonClient.java +++ b/redisson/src/main/java/org/redisson/api/RedissonClient.java @@ -31,6 +31,14 @@ import org.redisson.liveobject.provider.ResolverProvider; */ public interface RedissonClient { + /** + * Returns binary stream holder instance by name + * + * @param name of binary stream + * @return BinaryStream object + */ + RBinaryStream getBinaryStream(String name); + /** * Returns geospatial items holder instance by name. * @@ -496,14 +504,37 @@ public interface RedissonClient { */ RPatternTopic getPatternTopic(String pattern, Codec codec); + /** + * Returns unbounded fair queue instance by name. + * + * @param type of value + * @param name of queue + * @return queue + */ + RBlockingFairQueue getBlockingFairQueue(String name); + + RBlockingFairQueue getBlockingFairQueue(String name, Codec codec); + /** * Returns unbounded queue instance by name. * * @param type of value - * @param name - name of object - * @return Queue object + * @param name of object + * @return queue object */ RQueue getQueue(String name); + + /** + * Returns unbounded delayed queue instance by name. + *
<p>
+ * Could be attached to destination queue only. + * All elements are inserted with transfer delay to destination queue. + * + * @param type of value + * @param destinationQueue - destination queue + * @return Delayed queue object + */ + RDelayedQueue getDelayedQueue(RQueue destinationQueue); /** * Returns unbounded queue instance by name @@ -516,6 +547,50 @@ public interface RedissonClient { */ RQueue getQueue(String name, Codec codec); + /** + * Returns priority unbounded queue instance by name. + * It uses comparator to sort objects. + * + * @param type of value + * @param name of object + * @return Queue object + */ + RPriorityQueue getPriorityQueue(String name); + + /** + * Returns priority unbounded queue instance by name + * using provided codec for queue objects. + * It uses comparator to sort objects. + * + * @param type of value + * @param name - name of object + * @param codec - codec for message + * @return Queue object + */ + RPriorityQueue getPriorityQueue(String name, Codec codec); + + /** + * Returns priority unbounded deque instance by name. + * It uses comparator to sort objects. + * + * @param type of value + * @param name of object + * @return Queue object + */ + RPriorityDeque getPriorityDeque(String name); + + /** + * Returns priority unbounded deque instance by name + * using provided codec for queue objects. + * It uses comparator to sort objects. + * + * @param type of value + * @param name - name of object + * @param codec - codec for message + * @return Queue object + */ + RPriorityDeque getPriorityDeque(String name, Codec codec); + /** * Returns unbounded blocking queue instance by name. * diff --git a/redisson/src/main/java/org/redisson/api/SortOrder.java b/redisson/src/main/java/org/redisson/api/SortOrder.java new file mode 100644 index 000000000..a4f216847 --- /dev/null +++ b/redisson/src/main/java/org/redisson/api/SortOrder.java @@ -0,0 +1,25 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
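A brief usage sketch for the delayed and priority queue accessors added to RedissonClient above. It is illustrative only: the Redis address, queue names and the surrounding class are assumptions, not part of this change.

    import java.util.concurrent.TimeUnit;
    import org.redisson.Redisson;
    import org.redisson.api.RDelayedQueue;
    import org.redisson.api.RPriorityQueue;
    import org.redisson.api.RQueue;
    import org.redisson.api.RedissonClient;

    public class QueueExamples {
        public static void main(String[] args) {
            // assumes a Redis server running on 127.0.0.1:6379 (default config)
            RedissonClient redisson = Redisson.create();

            // delayed queue: elements are transferred to the destination queue after the given delay
            RQueue<String> destinationQueue = redisson.getQueue("jobs");
            RDelayedQueue<String> delayedQueue = redisson.getDelayedQueue(destinationQueue);
            delayedQueue.offer("job-1", 10, TimeUnit.SECONDS); // visible in "jobs" after ~10 seconds

            // priority queue: natural ordering defines poll order
            RPriorityQueue<Integer> priorityQueue = redisson.getPriorityQueue("prio");
            priorityQueue.add(3);
            priorityQueue.add(1);
            Integer head = priorityQueue.poll(); // 1

            redisson.shutdown();
        }
    }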
+ */ +package org.redisson.api; + +/** + * + * @author Nikita Koksharov + * + */ +public enum SortOrder { + ASC, DESC +} diff --git a/redisson/src/main/java/org/redisson/client/RedisClient.java b/redisson/src/main/java/org/redisson/client/RedisClient.java index 36669800b..67f660b5d 100644 --- a/redisson/src/main/java/org/redisson/client/RedisClient.java +++ b/redisson/src/main/java/org/redisson/client/RedisClient.java @@ -16,7 +16,7 @@ package org.redisson.client; import java.net.InetSocketAddress; -import java.net.URI; +import java.net.URL; import java.util.Map; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; @@ -31,7 +31,7 @@ import org.redisson.client.handler.ConnectionWatchdog; import org.redisson.client.protocol.RedisCommands; import org.redisson.misc.RPromise; import org.redisson.misc.RedissonPromise; -import org.redisson.misc.URIBuilder; +import org.redisson.misc.URLBuilder; import io.netty.bootstrap.Bootstrap; import io.netty.channel.Channel; @@ -70,15 +70,15 @@ public class RedisClient { private boolean hasOwnGroup; public RedisClient(String address) { - this(URIBuilder.create(address)); + this(URLBuilder.create(address)); } - public RedisClient(URI address) { + public RedisClient(URL address) { this(new HashedWheelTimer(), Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() * 2), new NioEventLoopGroup(), address); hasOwnGroup = true; } - public RedisClient(Timer timer, ExecutorService executor, EventLoopGroup group, URI address) { + public RedisClient(Timer timer, ExecutorService executor, EventLoopGroup group, URL address) { this(timer, executor, group, address.getHost(), address.getPort()); } @@ -97,7 +97,11 @@ public class RedisClient { public RedisClient(final Timer timer, ExecutorService executor, EventLoopGroup group, Class socketChannelClass, String host, int port, int connectTimeout, int commandTimeout) { + if (timer == null) { + throw new NullPointerException("timer param can't be null"); + } this.executor = executor; + this.timer = timer; addr = new InetSocketAddress(host, port); bootstrap = new Bootstrap().channel(socketChannelClass).group(group).remoteAddress(addr); bootstrap.handler(new ChannelInitializer() { @@ -227,11 +231,7 @@ public class RedisClient { return channels.close(); } - /** - * Execute INFO SERVER operation. - * - * @return Map extracted from each response line splitting by ':' symbol - */ + @Deprecated public Map serverInfo() { try { return serverInfoAsync().sync().get(); @@ -240,15 +240,10 @@ public class RedisClient { } } - /** - * Asynchronously execute INFO SERVER operation. 
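The RSortableAsync interface and the SortOrder enum above expose the Redis SORT command. A minimal sketch of how the new API can be used, assuming a local Redis server and assuming that RList implements the RSortable/RSortableAsync interfaces introduced here:

    import java.util.List;
    import org.redisson.Redisson;
    import org.redisson.api.RFuture;
    import org.redisson.api.RList;
    import org.redisson.api.RedissonClient;
    import org.redisson.api.SortOrder;

    public class SortExamples {
        public static void main(String[] args) {
            RedissonClient redisson = Redisson.create(); // assumes Redis on 127.0.0.1:6379

            RList<String> list = redisson.getList("numbers");
            list.add("2");
            list.add("1");
            list.add("3");

            // synchronous sorted read, backed by SORT
            List<String> asc = list.readSort(SortOrder.ASC);           // ["1", "2", "3"]

            // asynchronous, paginated variant from RSortableAsync
            RFuture<List<String>> firstTwo = list.readSortAsync(SortOrder.DESC, 0, 2);

            // sort and store into another list; the result is the stored length
            int stored = list.sortTo("numbersSorted", SortOrder.ASC);  // 3

            redisson.shutdown();
        }
    }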
- * - * @return A future for a map extracted from each response line splitting by - * ':' symbol - */ + @Deprecated public RFuture> serverInfoAsync() { final RedisConnection connection = connect(); - RFuture> async = connection.async(RedisCommands.SERVER_INFO); + RFuture> async = connection.async(RedisCommands.INFO_SERVER); async.addListener(new FutureListener>() { @Override public void operationComplete(Future> future) throws Exception { diff --git a/redisson/src/main/java/org/redisson/client/RedisConnection.java b/redisson/src/main/java/org/redisson/client/RedisConnection.java index 0d5b37070..23c64625c 100644 --- a/redisson/src/main/java/org/redisson/client/RedisConnection.java +++ b/redisson/src/main/java/org/redisson/client/RedisConnection.java @@ -18,6 +18,7 @@ package org.redisson.client; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; +import org.redisson.RedissonShutdownException; import org.redisson.api.RFuture; import org.redisson.client.codec.Codec; import org.redisson.client.handler.CommandsQueue; @@ -35,7 +36,6 @@ import io.netty.channel.ChannelFuture; import io.netty.util.AttributeKey; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; -import io.netty.util.concurrent.Promise; import io.netty.util.concurrent.ScheduledFuture; public class RedisConnection implements RedisCommands { @@ -51,15 +51,16 @@ public class RedisConnection implements RedisCommands { private ReconnectListener reconnectListener; private long lastUsageTime; - private final RFuture acquireFuture = RedissonPromise.newSucceededFuture(this); - public RedisConnection(RedisClient redisClient, Channel channel) { - super(); - this.redisClient = redisClient; + this(redisClient); updateChannel(channel); lastUsageTime = System.currentTimeMillis(); } + + protected RedisConnection(RedisClient redisClient) { + this.redisClient = redisClient; + } public static C getFrom(Channel channel) { return (C) channel.attr(RedisConnection.CONNECTION).get(); @@ -176,6 +177,11 @@ public class RedisConnection implements RedisCommands { timeout = redisClient.getCommandTimeout(); } + if (redisClient.getBootstrap().group().isShuttingDown()) { + RedissonShutdownException cause = new RedissonShutdownException("Redisson is shutdown"); + return RedissonPromise.newFailedFuture(cause); + } + final ScheduledFuture scheduledFuture = redisClient.getBootstrap().group().next().schedule(new Runnable() { @Override public void run() { @@ -240,10 +246,6 @@ public class RedisConnection implements RedisCommands { return getClass().getSimpleName() + "@" + System.identityHashCode(this) + " [redisClient=" + redisClient + ", channel=" + channel + "]"; } - public RFuture getAcquireFuture() { - return acquireFuture; - } - public void onDisconnect() { } diff --git a/redisson/src/main/java/org/redisson/client/RedisPubSubConnection.java b/redisson/src/main/java/org/redisson/client/RedisPubSubConnection.java index af723b860..7103d1af0 100644 --- a/redisson/src/main/java/org/redisson/client/RedisPubSubConnection.java +++ b/redisson/src/main/java/org/redisson/client/RedisPubSubConnection.java @@ -83,17 +83,17 @@ public class RedisPubSubConnection extends RedisConnection { } public void subscribe(Codec codec, String ... 
channel) { - async(new PubSubMessageDecoder(codec.getValueDecoder()), RedisCommands.SUBSCRIBE, channel); for (String ch : channel) { channels.put(ch, codec); } + async(new PubSubMessageDecoder(codec.getValueDecoder()), RedisCommands.SUBSCRIBE, channel); } public void psubscribe(Codec codec, String ... channel) { - async(new PubSubPatternMessageDecoder(codec.getValueDecoder()), RedisCommands.PSUBSCRIBE, channel); for (String ch : channel) { patternChannels.put(ch, codec); } + async(new PubSubPatternMessageDecoder(codec.getValueDecoder()), RedisCommands.PSUBSCRIBE, channel); } public void unsubscribe(final String ... channels) { diff --git a/redisson/src/main/java/org/redisson/client/codec/DelegateDecoderCodec.java b/redisson/src/main/java/org/redisson/client/codec/DelegateDecoderCodec.java index a6cc06f9d..e85be6166 100644 --- a/redisson/src/main/java/org/redisson/client/codec/DelegateDecoderCodec.java +++ b/redisson/src/main/java/org/redisson/client/codec/DelegateDecoderCodec.java @@ -17,6 +17,11 @@ package org.redisson.client.codec; import org.redisson.client.protocol.Decoder; +/** + * + * @author Nikita Koksharov + * + */ public class DelegateDecoderCodec extends StringCodec { private final Codec delegate; diff --git a/redisson/src/main/java/org/redisson/client/codec/DoubleCodec.java b/redisson/src/main/java/org/redisson/client/codec/DoubleCodec.java index 7255348f2..35c79bac0 100644 --- a/redisson/src/main/java/org/redisson/client/codec/DoubleCodec.java +++ b/redisson/src/main/java/org/redisson/client/codec/DoubleCodec.java @@ -22,6 +22,11 @@ import org.redisson.client.protocol.Decoder; import io.netty.buffer.ByteBuf; +/** + * + * @author Nikita Koksharov + * + */ public class DoubleCodec extends StringCodec { public static final DoubleCodec INSTANCE = new DoubleCodec(); diff --git a/redisson/src/main/java/org/redisson/client/codec/GeoEntryCodec.java b/redisson/src/main/java/org/redisson/client/codec/GeoEntryCodec.java index dbb44cce5..80cc8940e 100644 --- a/redisson/src/main/java/org/redisson/client/codec/GeoEntryCodec.java +++ b/redisson/src/main/java/org/redisson/client/codec/GeoEntryCodec.java @@ -17,6 +17,11 @@ package org.redisson.client.codec; import org.redisson.client.protocol.Encoder; +/** + * + * @author Nikita Koksharov + * + */ public class GeoEntryCodec extends StringCodec { private final ThreadLocal pos = new ThreadLocal() { diff --git a/redisson/src/main/java/org/redisson/client/codec/IntegerCodec.java b/redisson/src/main/java/org/redisson/client/codec/IntegerCodec.java index d0a6beedc..50fbf8494 100644 --- a/redisson/src/main/java/org/redisson/client/codec/IntegerCodec.java +++ b/redisson/src/main/java/org/redisson/client/codec/IntegerCodec.java @@ -22,6 +22,11 @@ import org.redisson.client.protocol.Decoder; import io.netty.buffer.ByteBuf; +/** + * + * @author Nikita Koksharov + * + */ public class IntegerCodec extends StringCodec { public static final IntegerCodec INSTANCE = new IntegerCodec(); diff --git a/redisson/src/main/java/org/redisson/client/codec/LongCodec.java b/redisson/src/main/java/org/redisson/client/codec/LongCodec.java index 7da934bfd..1e9708a75 100644 --- a/redisson/src/main/java/org/redisson/client/codec/LongCodec.java +++ b/redisson/src/main/java/org/redisson/client/codec/LongCodec.java @@ -22,6 +22,11 @@ import org.redisson.client.protocol.Decoder; import io.netty.buffer.ByteBuf; +/** + * + * @author Nikita Koksharov + * + */ public class LongCodec extends StringCodec { public static final LongCodec INSTANCE = new LongCodec(); diff --git 
a/redisson/src/main/java/org/redisson/client/codec/MapScanCodec.java b/redisson/src/main/java/org/redisson/client/codec/MapScanCodec.java new file mode 100644 index 000000000..8ee35fd74 --- /dev/null +++ b/redisson/src/main/java/org/redisson/client/codec/MapScanCodec.java @@ -0,0 +1,100 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.client.codec; + +import java.io.IOException; + +import org.redisson.client.handler.State; +import org.redisson.client.protocol.Decoder; +import org.redisson.client.protocol.Encoder; +import org.redisson.client.protocol.decoder.ScanObjectEntry; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; + +/** + * + * @author Nikita Koksharov + * + */ +public class MapScanCodec implements Codec { + + private final Codec delegate; + private final Codec mapValueCodec; + + public MapScanCodec(Codec delegate) { + this(delegate, null); + } + + public MapScanCodec(Codec delegate, Codec mapValueCodec) { + this.delegate = delegate; + this.mapValueCodec = mapValueCodec; + } + + @Override + public Decoder getValueDecoder() { + return delegate.getValueDecoder(); + } + + @Override + public Encoder getValueEncoder() { + return delegate.getValueEncoder(); + } + + @Override + public Decoder getMapValueDecoder() { + return new Decoder() { + @Override + public Object decode(ByteBuf buf, State state) throws IOException { + ByteBuf b = Unpooled.copiedBuffer(buf); + Codec c = delegate; + if (mapValueCodec != null) { + c = mapValueCodec; + } + Object val = c.getMapValueDecoder().decode(buf, state); + return new ScanObjectEntry(b, val); + } + }; + } + + @Override + public Encoder getMapValueEncoder() { + Codec c = delegate; + if (mapValueCodec != null) { + c = mapValueCodec; + } + + return c.getMapValueEncoder(); + } + + @Override + public Decoder getMapKeyDecoder() { + return new Decoder() { + @Override + public Object decode(ByteBuf buf, State state) throws IOException { + ByteBuf b = Unpooled.copiedBuffer(buf); + Object val = delegate.getMapKeyDecoder().decode(buf, state); + return new ScanObjectEntry(b, val); + } + }; + } + + @Override + public Encoder getMapKeyEncoder() { + return delegate.getMapKeyEncoder(); + } + +} diff --git a/redisson/src/main/java/org/redisson/client/codec/ScanCodec.java b/redisson/src/main/java/org/redisson/client/codec/ScanCodec.java index 407eb13ce..d4d0c6335 100644 --- a/redisson/src/main/java/org/redisson/client/codec/ScanCodec.java +++ b/redisson/src/main/java/org/redisson/client/codec/ScanCodec.java @@ -33,20 +33,21 @@ import io.netty.buffer.Unpooled; public class ScanCodec implements Codec { private final Codec delegate; - private final Codec mapValueCodec; public ScanCodec(Codec delegate) { - this(delegate, null); - } - - public ScanCodec(Codec delegate, Codec mapValueCodec) { this.delegate = delegate; - this.mapValueCodec = mapValueCodec; } @Override public Decoder getValueDecoder() { - return delegate.getValueDecoder(); + return new Decoder() { + @Override + 
public Object decode(ByteBuf buf, State state) throws IOException { + ByteBuf b = Unpooled.copiedBuffer(buf); + Object val = delegate.getValueDecoder().decode(buf, state); + return new ScanObjectEntry(b, val); + } + }; } @Override @@ -56,40 +57,17 @@ public class ScanCodec implements Codec { @Override public Decoder getMapValueDecoder() { - return new Decoder() { - @Override - public Object decode(ByteBuf buf, State state) throws IOException { - ByteBuf b = Unpooled.copiedBuffer(buf); - Codec c = delegate; - if (mapValueCodec != null) { - c = mapValueCodec; - } - Object val = c.getMapValueDecoder().decode(buf, state); - return new ScanObjectEntry(b, val); - } - }; + return delegate.getMapValueDecoder(); } @Override public Encoder getMapValueEncoder() { - Codec c = delegate; - if (mapValueCodec != null) { - c = mapValueCodec; - } - - return c.getMapValueEncoder(); + return delegate.getMapValueEncoder(); } @Override public Decoder getMapKeyDecoder() { - return new Decoder() { - @Override - public Object decode(ByteBuf buf, State state) throws IOException { - ByteBuf b = Unpooled.copiedBuffer(buf); - Object val = delegate.getMapKeyDecoder().decode(buf, state); - return new ScanObjectEntry(b, val); - } - }; + return delegate.getMapKeyDecoder(); } @Override diff --git a/redisson/src/main/java/org/redisson/client/codec/ScoredCodec.java b/redisson/src/main/java/org/redisson/client/codec/ScoredCodec.java index 660282681..4e41c17a2 100644 --- a/redisson/src/main/java/org/redisson/client/codec/ScoredCodec.java +++ b/redisson/src/main/java/org/redisson/client/codec/ScoredCodec.java @@ -17,6 +17,11 @@ package org.redisson.client.codec; import org.redisson.client.protocol.Encoder; +/** + * + * @author Nikita Koksharov + * + */ public class ScoredCodec extends StringCodec { private final Codec delegate; diff --git a/redisson/src/main/java/org/redisson/client/codec/StringCodec.java b/redisson/src/main/java/org/redisson/client/codec/StringCodec.java index ef5be4232..0a8bc3e26 100644 --- a/redisson/src/main/java/org/redisson/client/codec/StringCodec.java +++ b/redisson/src/main/java/org/redisson/client/codec/StringCodec.java @@ -25,6 +25,11 @@ import org.redisson.client.protocol.Encoder; import io.netty.buffer.ByteBuf; import io.netty.util.CharsetUtil; +/** + * + * @author Nikita Koksharov + * + */ public class StringCodec implements Codec { public static final StringCodec INSTANCE = new StringCodec(); diff --git a/redisson/src/main/java/org/redisson/client/handler/CommandDecoder.java b/redisson/src/main/java/org/redisson/client/handler/CommandDecoder.java index cf98f298c..b5024789f 100644 --- a/redisson/src/main/java/org/redisson/client/handler/CommandDecoder.java +++ b/redisson/src/main/java/org/redisson/client/handler/CommandDecoder.java @@ -36,6 +36,7 @@ import org.redisson.client.protocol.CommandData; import org.redisson.client.protocol.CommandsData; import org.redisson.client.protocol.Decoder; import org.redisson.client.protocol.QueueCommand; +import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.RedisCommand.ValueType; import org.redisson.client.protocol.decoder.ListMultiDecoder; import org.redisson.client.protocol.decoder.MultiDecoder; @@ -45,6 +46,7 @@ import org.redisson.client.protocol.pubsub.Message; import org.redisson.client.protocol.pubsub.PubSubMessage; import org.redisson.client.protocol.pubsub.PubSubPatternMessage; import org.redisson.client.protocol.pubsub.PubSubStatusMessage; +import org.redisson.misc.LogHelper; import org.redisson.misc.RPromise; import 
org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -203,11 +205,11 @@ public class CommandDecoder extends ReplayingDecoder { RPromise promise = commandBatch.getPromise(); if (error != null) { if (!promise.tryFailure(error) && promise.cause() instanceof RedisTimeoutException) { - log.warn("response has been skipped due to timeout! channel: {}, command: {}", ctx.channel(), data); + log.warn("response has been skipped due to timeout! channel: {}, command: {}",ctx.channel(), LogHelper.toString(data)); } } else { if (!promise.trySuccess(null) && promise.cause() instanceof RedisTimeoutException) { - log.warn("response has been skipped due to timeout! channel: {}, command: {}", ctx.channel(), data); + log.warn("response has been skipped due to timeout! channel: {}, command: {}", ctx.channel(), LogHelper.toString(data)); } } @@ -299,7 +301,8 @@ public class CommandDecoder extends ReplayingDecoder { decodeList(in, data, parts, channel, size, respParts); } else { - throw new IllegalStateException("Can't decode replay " + (char)code); + String dataStr = in.toString(0, in.writerIndex(), CharsetUtil.UTF_8); + throw new IllegalStateException("Can't decode replay: " + dataStr); } } @@ -328,7 +331,7 @@ public class CommandDecoder extends ReplayingDecoder { // store current message index checkpoint(); - handleMultiResult(data, null, channel, result); + handlePublishSubscribe(data, null, channel, result); // has next messages? if (in.writerIndex() > in.readerIndex()) { decode(in, data, null, channel); @@ -336,18 +339,18 @@ public class CommandDecoder extends ReplayingDecoder { } } - private void handleMultiResult(CommandData data, List parts, + private void handlePublishSubscribe(CommandData data, List parts, Channel channel, final Object result) { if (result instanceof PubSubStatusMessage) { String channelName = ((PubSubStatusMessage) result).getChannel(); String operation = ((PubSubStatusMessage) result).getType().name().toLowerCase(); PubSubKey key = new PubSubKey(channelName, operation); CommandData d = pubSubChannels.get(key); - if (Arrays.asList("PSUBSCRIBE", "SUBSCRIBE").contains(d.getCommand().getName())) { + if (Arrays.asList(RedisCommands.PSUBSCRIBE.getName(), RedisCommands.SUBSCRIBE.getName()).contains(d.getCommand().getName())) { pubSubChannels.remove(key); pubSubMessageDecoders.put(channelName, d.getMessageDecoder()); } - if (Arrays.asList("PUNSUBSCRIBE", "UNSUBSCRIBE").contains(d.getCommand().getName())) { + if (Arrays.asList(RedisCommands.PUNSUBSCRIBE.getName(), RedisCommands.UNSUBSCRIBE.getName()).contains(d.getCommand().getName())) { pubSubChannels.remove(key); pubSubMessageDecoders.remove(channelName); } @@ -380,7 +383,7 @@ public class CommandDecoder extends ReplayingDecoder { if (parts != null) { parts.add(result); } else { - if (!data.getPromise().trySuccess(result) && data.cause() instanceof RedisTimeoutException) { + if (data != null && !data.getPromise().trySuccess(result) && data.cause() instanceof RedisTimeoutException) { log.warn("response has been skipped due to timeout! 
channel: {}, command: {}, result: {}", channel, data, result); } } @@ -388,6 +391,9 @@ public class CommandDecoder extends ReplayingDecoder { private MultiDecoder messageDecoder(CommandData data, List parts, Channel channel) { if (data == null) { + if (parts.isEmpty()) { + return null; + } String command = parts.get(0).toString(); if (Arrays.asList("subscribe", "psubscribe", "punsubscribe", "unsubscribe").contains(command)) { String channelName = parts.get(1).toString(); @@ -411,13 +417,15 @@ public class CommandDecoder extends ReplayingDecoder { private Decoder selectDecoder(CommandData data, List parts) { if (data == null) { - if (parts.size() == 2 && parts.get(0).equals("message")) { - String channelName = (String) parts.get(1); - return pubSubMessageDecoders.get(channelName); - } - if (parts.size() == 3 && parts.get(0).equals("pmessage")) { - String patternName = (String) parts.get(1); - return pubSubMessageDecoders.get(patternName); + if (parts != null) { + if (parts.size() == 2 && "message".equals(parts.get(0))) { + String channelName = (String) parts.get(1); + return pubSubMessageDecoders.get(channelName); + } + if (parts.size() == 3 && "pmessage".equals(parts.get(0))) { + String patternName = (String) parts.get(1); + return pubSubMessageDecoders.get(patternName); + } } return StringCodec.INSTANCE.getValueDecoder(); } diff --git a/redisson/src/main/java/org/redisson/client/protocol/CommandData.java b/redisson/src/main/java/org/redisson/client/protocol/CommandData.java index 6d51b4ee6..7c0ca6bf5 100644 --- a/redisson/src/main/java/org/redisson/client/protocol/CommandData.java +++ b/redisson/src/main/java/org/redisson/client/protocol/CommandData.java @@ -15,12 +15,12 @@ */ package org.redisson.client.protocol; -import java.util.Arrays; import java.util.Collections; import java.util.List; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.decoder.MultiDecoder; +import org.redisson.misc.LogHelper; import org.redisson.misc.RPromise; /** @@ -85,19 +85,19 @@ public class CommandData implements QueueCommand { @Override public String toString() { return "CommandData [promise=" + promise + ", command=" + command + ", params=" - + Arrays.toString(params) + ", codec=" + codec + "]"; + + LogHelper.toString(params) + ", codec=" + codec + "]"; } @Override public List> getPubSubOperations() { - if (PUBSUB_COMMANDS.contains(getCommand().getName())) { + if (RedisCommands.PUBSUB_COMMANDS.contains(getCommand().getName())) { return Collections.singletonList((CommandData)this); } return Collections.emptyList(); } public boolean isBlockingCommand() { - return QueueCommand.TIMEOUTLESS_COMMANDS.contains(command.getName()) && !promise.isDone(); + return RedisCommands.BLOCKING_COMMANDS.contains(command.getName()) && !promise.isDone(); } } diff --git a/redisson/src/main/java/org/redisson/client/protocol/CommandsData.java b/redisson/src/main/java/org/redisson/client/protocol/CommandsData.java index 375e5c9eb..8620cb025 100644 --- a/redisson/src/main/java/org/redisson/client/protocol/CommandsData.java +++ b/redisson/src/main/java/org/redisson/client/protocol/CommandsData.java @@ -58,7 +58,7 @@ public class CommandsData implements QueueCommand { public List> getPubSubOperations() { List> result = new ArrayList>(); for (CommandData commandData : commands) { - if (PUBSUB_COMMANDS.equals(commandData.getCommand().getName())) { + if (RedisCommands.PUBSUB_COMMANDS.equals(commandData.getCommand().getName())) { result.add((CommandData)commandData); } } diff --git 
a/redisson/src/main/java/org/redisson/client/protocol/QueueCommand.java b/redisson/src/main/java/org/redisson/client/protocol/QueueCommand.java index 153ad7170..69e0745fc 100644 --- a/redisson/src/main/java/org/redisson/client/protocol/QueueCommand.java +++ b/redisson/src/main/java/org/redisson/client/protocol/QueueCommand.java @@ -15,10 +15,7 @@ */ package org.redisson.client.protocol; -import java.util.Arrays; -import java.util.HashSet; import java.util.List; -import java.util.Set; /** * @@ -26,12 +23,7 @@ import java.util.Set; * */ public interface QueueCommand { - - Set PUBSUB_COMMANDS = new HashSet(Arrays.asList("PSUBSCRIBE", "SUBSCRIBE", "PUNSUBSCRIBE", "UNSUBSCRIBE")); - Set TIMEOUTLESS_COMMANDS = new HashSet(Arrays.asList(RedisCommands.BLPOP_VALUE.getName(), - RedisCommands.BRPOP_VALUE.getName(), RedisCommands.BRPOPLPUSH.getName())); - List> getPubSubOperations(); boolean tryFailure(Throwable cause); diff --git a/redisson/src/main/java/org/redisson/client/protocol/RedisCommand.java b/redisson/src/main/java/org/redisson/client/protocol/RedisCommand.java index 1d67de90e..06d16f45a 100644 --- a/redisson/src/main/java/org/redisson/client/protocol/RedisCommand.java +++ b/redisson/src/main/java/org/redisson/client/protocol/RedisCommand.java @@ -22,6 +22,12 @@ import org.redisson.client.protocol.convertor.Convertor; import org.redisson.client.protocol.convertor.EmptyConvertor; import org.redisson.client.protocol.decoder.MultiDecoder; +/** + * + * @author Nikita Koksharov + * + * @param return type + */ public class RedisCommand { public enum ValueType {OBJECT, OBJECTS, MAP_VALUE, MAP_KEY, MAP, BINARY, STRING} diff --git a/redisson/src/main/java/org/redisson/client/protocol/RedisCommands.java b/redisson/src/main/java/org/redisson/client/protocol/RedisCommands.java index 4dbcf6c18..8ba7e5996 100644 --- a/redisson/src/main/java/org/redisson/client/protocol/RedisCommands.java +++ b/redisson/src/main/java/org/redisson/client/protocol/RedisCommands.java @@ -16,6 +16,7 @@ package org.redisson.client.protocol; import java.util.Arrays; +import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Map.Entry; @@ -34,6 +35,7 @@ import org.redisson.client.protocol.convertor.BooleanReplayConvertor; import org.redisson.client.protocol.convertor.DoubleReplayConvertor; import org.redisson.client.protocol.convertor.IntegerReplayConvertor; import org.redisson.client.protocol.convertor.KeyValueConvertor; +import org.redisson.client.protocol.convertor.LongListObjectDecoder; import org.redisson.client.protocol.convertor.LongReplayConvertor; import org.redisson.client.protocol.convertor.TrueReplayConvertor; import org.redisson.client.protocol.convertor.TypeConvertor; @@ -62,6 +64,11 @@ import org.redisson.client.protocol.decoder.StringReplayDecoder; import org.redisson.client.protocol.pubsub.PubSubStatusDecoder; import org.redisson.cluster.ClusterNodeInfo; +/** + * + * @author Nikita Koksharov + * + */ public interface RedisCommands { RedisStrictCommand GEOADD = new RedisStrictCommand("GEOADD", 4); @@ -75,7 +82,7 @@ public interface RedisCommands { RedisStrictCommand GETBIT = new RedisStrictCommand("GETBIT", new BooleanReplayConvertor()); RedisStrictCommand BITS_SIZE = new RedisStrictCommand("STRLEN", new BitsSizeReplayConvertor()); - RedisStrictCommand STRLEN = new RedisStrictCommand("STRLEN", new IntegerReplayConvertor()); + RedisStrictCommand STRLEN = new RedisStrictCommand("STRLEN"); RedisStrictCommand BITCOUNT = new RedisStrictCommand("BITCOUNT"); RedisStrictCommand BITPOS = new 
RedisStrictCommand("BITPOS", new IntegerReplayConvertor()); RedisStrictCommand SETBIT_VOID = new RedisStrictCommand("SETBIT", new VoidReplayConvertor()); @@ -123,6 +130,7 @@ public interface RedisCommands { RedisCommand> SCAN = new RedisCommand>("SCAN", new NestedMultiDecoder(new ObjectListReplayDecoder(), new ListScanResultReplayDecoder()), ValueType.OBJECT); RedisStrictCommand RANDOM_KEY = new RedisStrictCommand("RANDOMKEY", new StringDataDecoder()); RedisStrictCommand PING = new RedisStrictCommand("PING"); + RedisStrictCommand PING_BOOL = new RedisStrictCommand("PING", new BooleanNotNullReplayConvertor()); RedisStrictCommand UNWATCH = new RedisStrictCommand("UNWATCH", new VoidReplayConvertor()); RedisStrictCommand WATCH = new RedisStrictCommand("WATCH", new VoidReplayConvertor()); @@ -131,6 +139,7 @@ public interface RedisCommands { RedisCommand SADD_BOOL = new RedisCommand("SADD", new BooleanAmountReplayConvertor(), 2, ValueType.OBJECTS); RedisStrictCommand SADD = new RedisStrictCommand("SADD", 2, ValueType.OBJECTS); + RedisCommand> SPOP = new RedisCommand>("SPOP", new ObjectSetReplayDecoder()); RedisCommand SPOP_SINGLE = new RedisCommand("SPOP"); RedisCommand SADD_SINGLE = new RedisCommand("SADD", new BooleanReplayConvertor(), 2); RedisCommand SREM_SINGLE = new RedisCommand("SREM", new BooleanAmountReplayConvertor(), 2, ValueType.OBJECTS); @@ -174,15 +183,23 @@ public interface RedisCommands { RedisCommand BLPOP_VALUE = new RedisCommand("BLPOP", new KeyValueObjectDecoder(), new KeyValueConvertor()); RedisCommand BRPOP_VALUE = new RedisCommand("BRPOP", new KeyValueObjectDecoder(), new KeyValueConvertor()); + Set BLOCKING_COMMANDS = new HashSet( + Arrays.asList(BLPOP_VALUE.getName(), BRPOP_VALUE.getName(), BRPOPLPUSH.getName())); + RedisCommand PFADD = new RedisCommand("PFADD", new BooleanReplayConvertor(), 2); RedisStrictCommand PFCOUNT = new RedisStrictCommand("PFCOUNT"); RedisStrictCommand PFMERGE = new RedisStrictCommand("PFMERGE", new VoidReplayConvertor()); + RedisCommand> SORT_LIST = new RedisCommand>("SORT", new ObjectListReplayDecoder()); + RedisCommand> SORT_SET = new RedisCommand>("SORT", new ObjectSetReplayDecoder()); + RedisCommand SORT_TO = new RedisCommand("SORT", new IntegerReplayConvertor()); + RedisStrictCommand RPOP = new RedisStrictCommand("RPOP"); RedisStrictCommand LPUSH = new RedisStrictCommand("LPUSH", 2, ValueType.OBJECTS); RedisCommand LPUSH_BOOLEAN = new RedisCommand("LPUSH", new TrueReplayConvertor(), 2, ValueType.OBJECTS); RedisStrictCommand LPUSH_VOID = new RedisStrictCommand("LPUSH", new VoidReplayConvertor(), 2); RedisCommand> LRANGE = new RedisCommand>("LRANGE", new ObjectListReplayDecoder()); + RedisCommand> LRANGE_SET = new RedisCommand>("LRANGE", new ObjectSetReplayDecoder()); RedisCommand RPUSH = new RedisCommand("RPUSH", 2, ValueType.OBJECTS); RedisCommand RPUSH_BOOLEAN = new RedisCommand("RPUSH", new TrueReplayConvertor(), 2, ValueType.OBJECTS); RedisCommand RPUSH_VOID = new RedisCommand("RPUSH", new VoidReplayConvertor(), 2, ValueType.OBJECTS); @@ -202,6 +219,7 @@ public interface RedisCommands { RedisStrictCommand EVAL_INTEGER = new RedisStrictCommand("EVAL", new IntegerReplayConvertor()); RedisStrictCommand EVAL_LONG = new RedisStrictCommand("EVAL"); RedisStrictCommand EVAL_VOID = new RedisStrictCommand("EVAL", new VoidReplayConvertor()); + RedisCommand EVAL_VOID_WITH_VALUES = new RedisCommand("EVAL", new VoidReplayConvertor(), 4, ValueType.OBJECTS); RedisCommand EVAL_VOID_WITH_VALUES_6 = new RedisCommand("EVAL", new VoidReplayConvertor(), 6, 
ValueType.OBJECTS); RedisCommand> EVAL_LIST = new RedisCommand>("EVAL", new ObjectListReplayDecoder()); RedisCommand> EVAL_SET = new RedisCommand>("EVAL", new ObjectSetReplayDecoder()); @@ -209,6 +227,7 @@ public interface RedisCommands { RedisCommand EVAL_MAP_VALUE = new RedisCommand("EVAL", ValueType.MAP_VALUE); RedisCommand>> EVAL_MAP_ENTRY = new RedisCommand>>("EVAL", new ObjectMapEntryReplayDecoder(), ValueType.MAP); RedisCommand> EVAL_MAP_VALUE_LIST = new RedisCommand>("EVAL", new ObjectListReplayDecoder(), ValueType.MAP_VALUE); + RedisCommand> EVAL_MAP_KEY_SET = new RedisCommand>("EVAL", new ObjectSetReplayDecoder(), ValueType.MAP_KEY); RedisStrictCommand INCR = new RedisStrictCommand("INCR"); RedisStrictCommand INCRBY = new RedisStrictCommand("INCRBY"); @@ -253,10 +272,15 @@ public interface RedisCommands { RedisStrictCommand GET_LONG = new RedisStrictCommand("GET", new LongReplayConvertor()); RedisStrictCommand GET_INTEGER = new RedisStrictCommand("GET", new IntegerReplayConvertor()); RedisCommand GETSET = new RedisCommand("GETSET", 2); + RedisCommand GETRANGE = new RedisCommand("GETRANGE"); + RedisCommand APPEND = new RedisCommand("APPEND"); + RedisCommand SETRANGE = new RedisCommand("SETRANGE"); RedisCommand SET = new RedisCommand("SET", new VoidReplayConvertor(), 2); RedisCommand SETPXNX = new RedisCommand("SET", new BooleanNotNullReplayConvertor(), 2); RedisCommand SETNX = new RedisCommand("SETNX", new BooleanReplayConvertor(), 2); RedisCommand SETEX = new RedisCommand("SETEX", new VoidReplayConvertor(), 3); + + RedisStrictCommand EXISTS_LONG = new RedisStrictCommand("EXISTS"); RedisStrictCommand EXISTS = new RedisStrictCommand("EXISTS", new BooleanReplayConvertor()); RedisStrictCommand NOT_EXISTS = new RedisStrictCommand("EXISTS", new BooleanNumberReplayConvertor(1L)); @@ -272,8 +296,11 @@ public interface RedisCommands { RedisCommand PSUBSCRIBE = new RedisCommand("PSUBSCRIBE", new PubSubStatusDecoder()); RedisCommand PUNSUBSCRIBE = new RedisCommand("PUNSUBSCRIBE", new PubSubStatusDecoder()); + Set PUBSUB_COMMANDS = new HashSet( + Arrays.asList(PSUBSCRIBE.getName(), SUBSCRIBE.getName(), PUNSUBSCRIBE.getName(), UNSUBSCRIBE.getName())); + RedisStrictCommand> CLUSTER_NODES = new RedisStrictCommand>("CLUSTER", "NODES", new ClusterNodesDecoder()); - RedisStrictCommand> TIME = new RedisStrictCommand>("TIME", new StringListReplayDecoder()); + RedisCommand TIME = new RedisCommand("TIME", new LongListObjectDecoder()); RedisStrictCommand> CLUSTER_INFO = new RedisStrictCommand>("CLUSTER", "INFO", new StringMapDataDecoder()); RedisStrictCommand> SENTINEL_GET_MASTER_ADDR_BY_NAME = new RedisStrictCommand>("SENTINEL", "GET-MASTER-ADDR-BY-NAME", new StringListReplayDecoder()); @@ -289,9 +316,17 @@ public interface RedisCommands { RedisStrictCommand> CLUSTER_GETKEYSINSLOT = new RedisStrictCommand>("CLUSTER", "GETKEYSINSLOT", new StringListReplayDecoder()); RedisStrictCommand CLUSTER_SETSLOT = new RedisStrictCommand("CLUSTER", "SETSLOT"); RedisStrictCommand CLUSTER_MEET = new RedisStrictCommand("CLUSTER", "MEET"); - RedisStrictCommand> INFO_KEYSPACE = new RedisStrictCommand>("INFO", "KEYSPACE", new StringMapDataDecoder()); + + RedisStrictCommand> INFO_ALL = new RedisStrictCommand>("INFO", "ALL", new StringMapDataDecoder()); + RedisStrictCommand> INFO_DEFAULT = new RedisStrictCommand>("INFO", "DEFAULT", new StringMapDataDecoder()); + RedisStrictCommand> INFO_SERVER = new RedisStrictCommand>("INFO", "SERVER", new StringMapDataDecoder()); + RedisStrictCommand> INFO_CLIENTS = new 
RedisStrictCommand>("INFO", "CLIENTS", new StringMapDataDecoder()); + RedisStrictCommand> INFO_MEMORY = new RedisStrictCommand>("INFO", "MEMORY", new StringMapDataDecoder()); + RedisStrictCommand> INFO_PERSISTENCE = new RedisStrictCommand>("INFO", "PERSISTENCE", new StringMapDataDecoder()); + RedisStrictCommand> INFO_STATS = new RedisStrictCommand>("INFO", "STATS", new StringMapDataDecoder()); + RedisStrictCommand> INFO_REPLICATION = new RedisStrictCommand>("INFO", "REPLICATION", new StringMapDataDecoder()); + RedisStrictCommand> INFO_CPU = new RedisStrictCommand>("INFO", "CPU", new StringMapDataDecoder()); + RedisStrictCommand> INFO_COMMANDSTATS = new RedisStrictCommand>("INFO", "COMMANDSTATS", new StringMapDataDecoder()); RedisStrictCommand> INFO_CLUSTER = new RedisStrictCommand>("INFO", "CLUSTER", new StringMapDataDecoder()); - RedisStrictCommand INFO_REPLICATION = new RedisStrictCommand("INFO", "replication", new StringDataDecoder()); - RedisStrictCommand> INFO_PERSISTENCE = new RedisStrictCommand>("INFO", "persistence", new StringMapDataDecoder()); - RedisStrictCommand> SERVER_INFO = new RedisStrictCommand>("INFO", "SERVER", new StringMapDataDecoder()); + RedisStrictCommand> INFO_KEYSPACE = new RedisStrictCommand>("INFO", "KEYSPACE", new StringMapDataDecoder()); } diff --git a/redisson/src/main/java/org/redisson/client/protocol/convertor/LongListObjectDecoder.java b/redisson/src/main/java/org/redisson/client/protocol/convertor/LongListObjectDecoder.java new file mode 100644 index 000000000..6df5af8da --- /dev/null +++ b/redisson/src/main/java/org/redisson/client/protocol/convertor/LongListObjectDecoder.java @@ -0,0 +1,40 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.client.protocol.convertor; + +import java.util.List; + +import org.redisson.client.handler.State; +import org.redisson.client.protocol.decoder.ListFirstObjectDecoder; + +/** + * + * @author Nikita Koksharov + * + */ + +public class LongListObjectDecoder extends ListFirstObjectDecoder { + + @Override + public Object decode(List parts, State state) { + Object result = super.decode(parts, state); + if (result != null) { + return Long.valueOf(result.toString()); + } + return result; + } + +} diff --git a/redisson/src/main/java/org/redisson/connection/decoder/ListFirstObjectDecoder.java b/redisson/src/main/java/org/redisson/client/protocol/decoder/ListFirstObjectDecoder.java similarity index 92% rename from redisson/src/main/java/org/redisson/connection/decoder/ListFirstObjectDecoder.java rename to redisson/src/main/java/org/redisson/client/protocol/decoder/ListFirstObjectDecoder.java index 27e7285a9..ded8a9144 100644 --- a/redisson/src/main/java/org/redisson/connection/decoder/ListFirstObjectDecoder.java +++ b/redisson/src/main/java/org/redisson/client/protocol/decoder/ListFirstObjectDecoder.java @@ -13,12 +13,11 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ -package org.redisson.connection.decoder; +package org.redisson.client.protocol.decoder; import java.util.List; import org.redisson.client.handler.State; -import org.redisson.client.protocol.decoder.MultiDecoder; import io.netty.buffer.ByteBuf; diff --git a/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectMapDecoder.java b/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectMapDecoder.java index 16cdff6ab..54f48c3da 100644 --- a/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectMapDecoder.java +++ b/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectMapDecoder.java @@ -16,7 +16,7 @@ package org.redisson.client.protocol.decoder; import java.io.IOException; -import java.util.HashMap; +import java.util.LinkedHashMap; import java.util.List; import java.util.Map; @@ -25,6 +25,11 @@ import org.redisson.client.handler.State; import io.netty.buffer.ByteBuf; +/** + * + * @author Nikita Koksharov + * + */ public class ObjectMapDecoder implements MultiDecoder> { private Codec codec; @@ -46,7 +51,7 @@ public class ObjectMapDecoder implements MultiDecoder> { @Override public Map decode(List parts, State state) { - Map result = new HashMap(parts.size()/2); + Map result = new LinkedHashMap(parts.size()/2); for (int i = 0; i < parts.size(); i++) { if (i % 2 != 0) { result.put(parts.get(i-1), parts.get(i)); diff --git a/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectMapEntryReplayDecoder.java b/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectMapEntryReplayDecoder.java index 6054602f6..3689aaaa0 100644 --- a/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectMapEntryReplayDecoder.java +++ b/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectMapEntryReplayDecoder.java @@ -15,7 +15,6 @@ */ package org.redisson.client.protocol.decoder; -import java.util.HashMap; import java.util.LinkedHashMap; import java.util.List; import java.util.Map; diff --git a/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectSetReplayDecoder.java b/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectSetReplayDecoder.java index c3aebefdb..41defa918 100644 --- a/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectSetReplayDecoder.java +++ b/redisson/src/main/java/org/redisson/client/protocol/decoder/ObjectSetReplayDecoder.java @@ -15,7 +15,6 @@ */ package org.redisson.client.protocol.decoder; -import java.util.HashSet; import java.util.LinkedHashSet; import java.util.List; import java.util.Set; @@ -24,6 +23,12 @@ import org.redisson.client.handler.State; import io.netty.buffer.ByteBuf; +/** + * + * @author Nikita Koksharov + * + * @param value type + */ public class ObjectSetReplayDecoder implements MultiDecoder> { @Override diff --git a/redisson/src/main/java/org/redisson/cluster/ClusterConnectionManager.java b/redisson/src/main/java/org/redisson/cluster/ClusterConnectionManager.java index fcb4143cb..e6c5f7b8b 100644 --- a/redisson/src/main/java/org/redisson/cluster/ClusterConnectionManager.java +++ b/redisson/src/main/java/org/redisson/cluster/ClusterConnectionManager.java @@ -15,7 +15,7 @@ */ package org.redisson.cluster; -import java.net.URI; +import java.net.URL; import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; @@ -38,6 +38,7 @@ import org.redisson.client.RedisConnectionException; import org.redisson.client.RedisException; import org.redisson.client.protocol.RedisCommands; import org.redisson.cluster.ClusterNodeInfo.Flag; 
+import org.redisson.cluster.ClusterPartition.Type; import org.redisson.config.ClusterServersConfig; import org.redisson.config.Config; import org.redisson.config.MasterSlaveServersConfig; @@ -57,28 +58,34 @@ import io.netty.util.concurrent.GlobalEventExecutor; import io.netty.util.concurrent.ScheduledFuture; import io.netty.util.internal.PlatformDependent; +/** + * + * @author Nikita Koksharov + * + */ public class ClusterConnectionManager extends MasterSlaveConnectionManager { private final Logger log = LoggerFactory.getLogger(getClass()); - private final Map nodeConnections = PlatformDependent.newConcurrentHashMap(); + private final Map nodeConnections = PlatformDependent.newConcurrentHashMap(); private final ConcurrentMap lastPartitions = PlatformDependent.newConcurrentHashMap(); private ScheduledFuture monitorFuture; - private volatile URI lastClusterNode; + private volatile URL lastClusterNode; public ClusterConnectionManager(ClusterServersConfig cfg, Config config) { super(config); connectListener = new ClusterConnectionListener(cfg.getReadMode() != ReadMode.MASTER); this.config = create(cfg); + initTimer(this.config); init(this.config); Throwable lastException = null; List failedMasters = new ArrayList(); - for (URI addr : cfg.getNodeAddresses()) { + for (URL addr : cfg.getNodeAddresses()) { RFuture connectionFuture = connect(cfg, addr); try { RedisConnection connection = connectionFuture.syncUninterruptibly().getNow(); @@ -152,7 +159,7 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { } } - private RFuture connect(ClusterServersConfig cfg, final URI addr) { + private RFuture connect(ClusterServersConfig cfg, final URL addr) { RedisConnection connection = nodeConnections.get(addr); if (connection != null) { return newSucceededFuture(connection); @@ -302,22 +309,22 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { return result; } - private void scheduleClusterChangeCheck(final ClusterServersConfig cfg, final Iterator iterator) { + private void scheduleClusterChangeCheck(final ClusterServersConfig cfg, final Iterator iterator) { monitorFuture = GlobalEventExecutor.INSTANCE.schedule(new Runnable() { @Override public void run() { AtomicReference lastException = new AtomicReference(); - Iterator nodesIterator = iterator; + Iterator nodesIterator = iterator; if (nodesIterator == null) { - List nodes = new ArrayList(); - List slaves = new ArrayList(); + List nodes = new ArrayList(); + List slaves = new ArrayList(); for (ClusterPartition partition : getLastPartitions()) { if (!partition.isMasterFail()) { nodes.add(partition.getMasterAddress()); } - Set partitionSlaves = new HashSet(partition.getSlaveAddresses()); + Set partitionSlaves = new HashSet(partition.getSlaveAddresses()); partitionSlaves.removeAll(partition.getFailedSlaveAddresses()); slaves.addAll(partitionSlaves); } @@ -333,19 +340,23 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { }, cfg.getScanInterval(), TimeUnit.MILLISECONDS); } - private void checkClusterState(final ClusterServersConfig cfg, final Iterator iterator, final AtomicReference lastException) { + private void checkClusterState(final ClusterServersConfig cfg, final Iterator iterator, final AtomicReference lastException) { if (!iterator.hasNext()) { log.error("Can't update cluster state", lastException.get()); scheduleClusterChangeCheck(cfg, null); return; } - final URI uri = iterator.next(); + if (!getShutdownLatch().acquire()) { + return; + } + final URL uri = 
iterator.next(); RFuture connectionFuture = connect(cfg, uri); connectionFuture.addListener(new FutureListener() { @Override public void operationComplete(Future future) throws Exception { if (!future.isSuccess()) { lastException.set(future.cause()); + getShutdownLatch().release(); checkClusterState(cfg, iterator, lastException); return; } @@ -356,7 +367,7 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { }); } - private void updateClusterState(final ClusterServersConfig cfg, final RedisConnection connection, final Iterator iterator, final URI uri) { + private void updateClusterState(final ClusterServersConfig cfg, final RedisConnection connection, final Iterator iterator, final URL uri) { RFuture> future = connection.async(RedisCommands.CLUSTER_NODES); future.addListener(new FutureListener>() { @Override @@ -364,6 +375,7 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { if (!future.isSuccess()) { log.error("Can't execute CLUSTER_NODES with " + connection.getRedisClient().getAddr(), future.cause()); close(connection); + getShutdownLatch().release(); scheduleClusterChangeCheck(cfg, iterator); return; } @@ -387,6 +399,7 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { public void operationComplete(Future future) throws Exception { checkSlotsMigration(newPartitions, nodesValue.toString()); checkSlotsChange(cfg, newPartitions, nodesValue.toString()); + getShutdownLatch().release(); scheduleClusterChangeCheck(cfg, null); } }); @@ -403,7 +416,7 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { MasterSlaveEntry entry = getEntry(currentPart.getMasterAddr()); // should be invoked first in order to remove stale failedSlaveAddresses - Set addedSlaves = addRemoveSlaves(entry, currentPart, newPart); + Set addedSlaves = addRemoveSlaves(entry, currentPart, newPart); // Do some slaves have changed state from failed to alive? 
upDownSlaves(entry, currentPart, newPart, addedSlaves); @@ -412,20 +425,20 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { } } - private void upDownSlaves(final MasterSlaveEntry entry, final ClusterPartition currentPart, final ClusterPartition newPart, Set addedSlaves) { - Set aliveSlaves = new HashSet(currentPart.getFailedSlaveAddresses()); + private void upDownSlaves(final MasterSlaveEntry entry, final ClusterPartition currentPart, final ClusterPartition newPart, Set addedSlaves) { + Set aliveSlaves = new HashSet(currentPart.getFailedSlaveAddresses()); aliveSlaves.removeAll(addedSlaves); aliveSlaves.removeAll(newPart.getFailedSlaveAddresses()); - for (URI uri : aliveSlaves) { + for (URL uri : aliveSlaves) { currentPart.removeFailedSlaveAddress(uri); if (entry.slaveUp(uri.getHost(), uri.getPort(), FreezeReason.MANAGER)) { log.info("slave: {} has up for slot ranges: {}", uri, currentPart.getSlotRanges()); } } - Set failedSlaves = new HashSet(newPart.getFailedSlaveAddresses()); + Set failedSlaves = new HashSet(newPart.getFailedSlaveAddresses()); failedSlaves.removeAll(currentPart.getFailedSlaveAddresses()); - for (URI uri : failedSlaves) { + for (URL uri : failedSlaves) { currentPart.addFailedSlaveAddress(uri); if (entry.slaveDown(uri.getHost(), uri.getPort(), FreezeReason.MANAGER)) { log.warn("slave: {} has down for slot ranges: {}", uri, currentPart.getSlotRanges()); @@ -433,11 +446,11 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { } } - private Set addRemoveSlaves(final MasterSlaveEntry entry, final ClusterPartition currentPart, final ClusterPartition newPart) { - Set removedSlaves = new HashSet(currentPart.getSlaveAddresses()); + private Set addRemoveSlaves(final MasterSlaveEntry entry, final ClusterPartition currentPart, final ClusterPartition newPart) { + Set removedSlaves = new HashSet(currentPart.getSlaveAddresses()); removedSlaves.removeAll(newPart.getSlaveAddresses()); - for (URI uri : removedSlaves) { + for (URL uri : removedSlaves) { currentPart.removeSlaveAddress(uri); if (entry.slaveDown(uri.getHost(), uri.getPort(), FreezeReason.MANAGER)) { @@ -445,9 +458,9 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { } } - Set addedSlaves = new HashSet(newPart.getSlaveAddresses()); + Set addedSlaves = new HashSet(newPart.getSlaveAddresses()); addedSlaves.removeAll(currentPart.getSlaveAddresses()); - for (final URI uri : addedSlaves) { + for (final URL uri : addedSlaves) { RFuture future = entry.addSlave(uri.getHost(), uri.getPort()); future.addListener(new FutureListener() { @Override @@ -504,8 +517,8 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { if (!newMasterPart.getMasterAddress().equals(currentPart.getMasterAddress())) { log.info("changing master from {} to {} for {}", currentPart.getMasterAddress(), newMasterPart.getMasterAddress(), slot); - URI newUri = newMasterPart.getMasterAddress(); - URI oldUri = currentPart.getMasterAddress(); + URL newUri = newMasterPart.getMasterAddress(); + URL oldUri = currentPart.getMasterAddress(); changeMaster(slot, newUri.getHost(), newUri.getPort()); @@ -664,35 +677,67 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { } String id = clusterNodeInfo.getNodeId(); + ClusterPartition slavePartition = getPartition(partitions, id); + if (clusterNodeInfo.containsFlag(Flag.SLAVE)) { id = clusterNodeInfo.getSlaveOf(); } - - ClusterPartition partition = partitions.get(id); - if (partition == null) { - 
partition = new ClusterPartition(id); - partitions.put(id, partition); - } - - if (clusterNodeInfo.containsFlag(Flag.FAIL)) { - if (clusterNodeInfo.containsFlag(Flag.SLAVE)) { - partition.addFailedSlaveAddress(clusterNodeInfo.getAddress()); - } else { - partition.setMasterFail(true); - } - } + ClusterPartition partition = getPartition(partitions, id); if (clusterNodeInfo.containsFlag(Flag.SLAVE)) { + slavePartition.setParent(partition); + partition.addSlaveAddress(clusterNodeInfo.getAddress()); + if (clusterNodeInfo.containsFlag(Flag.FAIL)) { + partition.addFailedSlaveAddress(clusterNodeInfo.getAddress()); + } } else { partition.addSlotRanges(clusterNodeInfo.getSlotRanges()); partition.setMasterAddress(clusterNodeInfo.getAddress()); + partition.setType(Type.MASTER); + if (clusterNodeInfo.containsFlag(Flag.FAIL)) { + partition.setMasterFail(true); + } } } + + addCascadeSlaves(partitions); + return partitions.values(); } + private void addCascadeSlaves(Map partitions) { + Iterator iter = partitions.values().iterator(); + while (iter.hasNext()) { + ClusterPartition cp = iter.next(); + if (cp.getType() != Type.SLAVE) { + continue; + } + + if (cp.getParent() != null && cp.getParent().getType() == Type.MASTER) { + ClusterPartition parent = cp.getParent(); + for (URL addr : cp.getSlaveAddresses()) { + parent.addSlaveAddress(addr); + } + for (URL addr : cp.getFailedSlaveAddresses()) { + parent.addFailedSlaveAddress(addr); + } + } + iter.remove(); + } + } + + private ClusterPartition getPartition(Map partitions, String id) { + ClusterPartition partition = partitions.get(id); + if (partition == null) { + partition = new ClusterPartition(id); + partition.setType(Type.SLAVE); + partitions.put(id, partition); + } + return partition; + } + @Override public void shutdown() { monitorFuture.cancel(true); @@ -708,7 +753,7 @@ public class ClusterConnectionManager extends MasterSlaveConnectionManager { } @Override - public URI getLastClusterNode() { + public URL getLastClusterNode() { return lastClusterNode; } diff --git a/redisson/src/main/java/org/redisson/cluster/ClusterNodeInfo.java b/redisson/src/main/java/org/redisson/cluster/ClusterNodeInfo.java index 241625981..17a78c5ca 100644 --- a/redisson/src/main/java/org/redisson/cluster/ClusterNodeInfo.java +++ b/redisson/src/main/java/org/redisson/cluster/ClusterNodeInfo.java @@ -16,10 +16,11 @@ package org.redisson.cluster; import java.net.URI; +import java.net.URL; import java.util.HashSet; import java.util.Set; -import org.redisson.misc.URIBuilder; +import org.redisson.misc.URLBuilder; /** * @@ -33,7 +34,7 @@ public class ClusterNodeInfo { private final String nodeInfo; private String nodeId; - private URI address; + private URL address; private final Set flags = new HashSet(); private String slaveOf; @@ -50,11 +51,11 @@ public class ClusterNodeInfo { this.nodeId = nodeId; } - public URI getAddress() { + public URL getAddress() { return address; } public void setAddress(String address) { - this.address = URIBuilder.create(address); + this.address = URLBuilder.create(address); } public void addSlotRange(ClusterSlotRange range) { diff --git a/redisson/src/main/java/org/redisson/cluster/ClusterPartition.java b/redisson/src/main/java/org/redisson/cluster/ClusterPartition.java index ddfa42b7b..651c178fa 100644 --- a/redisson/src/main/java/org/redisson/cluster/ClusterPartition.java +++ b/redisson/src/main/java/org/redisson/cluster/ClusterPartition.java @@ -16,29 +16,56 @@ package org.redisson.cluster; import java.net.InetSocketAddress; -import java.net.URI; 
+import java.net.URL; import java.util.Collections; import java.util.HashSet; import java.util.Set; -import org.redisson.misc.URIBuilder; +import org.redisson.misc.URLBuilder; +/** + * + * @author Nikita Koksharov + * + */ public class ClusterPartition { + public enum Type {MASTER, SLAVE} + + private Type type = Type.MASTER; + private final String nodeId; private boolean masterFail; - private URI masterAddress; - private final Set slaveAddresses = new HashSet(); - private final Set failedSlaves = new HashSet(); + private URL masterAddress; + private final Set slaveAddresses = new HashSet(); + private final Set failedSlaves = new HashSet(); private final Set slots = new HashSet(); private final Set slotRanges = new HashSet(); + private ClusterPartition parent; + public ClusterPartition(String nodeId) { super(); this.nodeId = nodeId; } + + public ClusterPartition getParent() { + return parent; + } + + public void setParent(ClusterPartition parent) { + this.parent = parent; + } + public void setType(Type type) { + this.type = type; + } + + public Type getType() { + return type; + } + public String getNodeId() { return nodeId; } @@ -85,33 +112,33 @@ public class ClusterPartition { return new InetSocketAddress(masterAddress.getHost(), masterAddress.getPort()); } - public URI getMasterAddress() { + public URL getMasterAddress() { return masterAddress; } public void setMasterAddress(String masterAddress) { - setMasterAddress(URIBuilder.create(masterAddress)); + setMasterAddress(URLBuilder.create(masterAddress)); } - public void setMasterAddress(URI masterAddress) { + public void setMasterAddress(URL masterAddress) { this.masterAddress = masterAddress; } - public void addFailedSlaveAddress(URI address) { + public void addFailedSlaveAddress(URL address) { failedSlaves.add(address); } - public Set getFailedSlaveAddresses() { + public Set getFailedSlaveAddresses() { return Collections.unmodifiableSet(failedSlaves); } - public void removeFailedSlaveAddress(URI uri) { + public void removeFailedSlaveAddress(URL uri) { failedSlaves.remove(uri); } - public void addSlaveAddress(URI address) { + public void addSlaveAddress(URL address) { slaveAddresses.add(address); } - public Set getSlaveAddresses() { + public Set getSlaveAddresses() { return Collections.unmodifiableSet(slaveAddresses); } - public void removeSlaveAddress(URI uri) { + public void removeSlaveAddress(URL uri) { slaveAddresses.remove(uri); failedSlaves.remove(uri); } diff --git a/redisson/src/main/java/org/redisson/codec/LZ4Codec.java b/redisson/src/main/java/org/redisson/codec/LZ4Codec.java index c0a358261..e87b32a3a 100644 --- a/redisson/src/main/java/org/redisson/codec/LZ4Codec.java +++ b/redisson/src/main/java/org/redisson/codec/LZ4Codec.java @@ -63,7 +63,11 @@ public class LZ4Codec implements Codec { LZ4SafeDecompressor decompressor = factory.safeDecompressor(); bytes = decompressor.decompress(bytes, bytes.length*3); ByteBuf bf = Unpooled.wrappedBuffer(bytes); - return innerCodec.getValueDecoder().decode(bf, state); + try { + return innerCodec.getValueDecoder().decode(bf, state); + } finally { + bf.release(); + } } }; diff --git a/redisson/src/main/java/org/redisson/codec/SnappyCodec.java b/redisson/src/main/java/org/redisson/codec/SnappyCodec.java index 7dea45be8..b4577ad84 100644 --- a/redisson/src/main/java/org/redisson/codec/SnappyCodec.java +++ b/redisson/src/main/java/org/redisson/codec/SnappyCodec.java @@ -57,7 +57,11 @@ public class SnappyCodec implements Codec { buf.readBytes(bytes); bytes = Snappy.uncompress(bytes); ByteBuf bf = 
Unpooled.wrappedBuffer(bytes); - return innerCodec.getValueDecoder().decode(bf, state); + try { + return innerCodec.getValueDecoder().decode(bf, state); + } finally { + bf.release(); + } } }; diff --git a/redisson/src/main/java/org/redisson/command/CommandAsyncExecutor.java b/redisson/src/main/java/org/redisson/command/CommandAsyncExecutor.java index 49eca78d9..e36726524 100644 --- a/redisson/src/main/java/org/redisson/command/CommandAsyncExecutor.java +++ b/redisson/src/main/java/org/redisson/command/CommandAsyncExecutor.java @@ -50,6 +50,8 @@ public interface CommandAsyncExecutor { boolean await(RFuture RFuture, long timeout, TimeUnit timeoutUnit) throws InterruptedException; + void syncSubscription(RFuture future); + V get(RFuture RFuture); RFuture writeAsync(MasterSlaveEntry entry, Codec codec, RedisCommand command, Object ... params); diff --git a/redisson/src/main/java/org/redisson/command/CommandAsyncService.java b/redisson/src/main/java/org/redisson/command/CommandAsyncService.java index 35d1cb7a8..e7fde3197 100644 --- a/redisson/src/main/java/org/redisson/command/CommandAsyncService.java +++ b/redisson/src/main/java/org/redisson/command/CommandAsyncService.java @@ -20,7 +20,9 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Collections; +import java.util.HashMap; import java.util.List; +import java.util.Map; import java.util.Set; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; @@ -28,9 +30,12 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import org.redisson.RedisClientResult; +import org.redisson.RedissonReference; import org.redisson.RedissonShutdownException; import org.redisson.SlotCallback; import org.redisson.api.RFuture; +import org.redisson.api.RedissonClient; +import org.redisson.api.RedissonReactiveClient; import org.redisson.client.RedisAskException; import org.redisson.client.RedisConnection; import org.redisson.client.RedisException; @@ -42,14 +47,20 @@ import org.redisson.client.WriteRedisConnectionException; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.CommandData; import org.redisson.client.protocol.CommandsData; -import org.redisson.client.protocol.QueueCommand; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommands; +import org.redisson.client.protocol.ScoredEntry; +import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.MapScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; +import org.redisson.config.MasterSlaveServersConfig; import org.redisson.connection.ConnectionManager; import org.redisson.connection.MasterSlaveEntry; import org.redisson.connection.NodeSource; import org.redisson.connection.NodeSource.Redirect; +import org.redisson.misc.LogHelper; import org.redisson.misc.RPromise; +import org.redisson.misc.RedissonObjectFactory; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -60,16 +71,6 @@ import io.netty.util.Timeout; import io.netty.util.TimerTask; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; -import java.util.HashMap; -import java.util.Map; -import org.redisson.RedissonReference; -import org.redisson.api.RedissonClient; -import org.redisson.api.RedissonReactiveClient; -import org.redisson.client.protocol.ScoredEntry; -import org.redisson.client.protocol.decoder.ListScanResult; -import 
org.redisson.client.protocol.decoder.MapScanResult; -import org.redisson.client.protocol.decoder.ScanObjectEntry; -import org.redisson.misc.RedissonObjectFactory; /** * @@ -116,6 +117,20 @@ public class CommandAsyncService implements CommandAsyncExecutor { return redisson != null || redissonReactive != null; } + @Override + public void syncSubscription(RFuture future) { + MasterSlaveServersConfig config = connectionManager.getConfig(); + try { + int timeout = config.getTimeout() + config.getRetryInterval()*config.getRetryAttempts(); + if (!future.await(timeout)) { + throw new RedisTimeoutException("Subscribe timeout: (" + timeout + "ms)"); + } + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + } + future.syncUninterruptibly(); + } + @Override public V get(RFuture future) { if (!future.isDone()) { @@ -520,7 +535,7 @@ public class CommandAsyncService implements CommandAsyncExecutor { if (details.getAttempt() == connectionManager.getConfig().getRetryAttempts()) { if (details.getException() == null) { - details.setException(new RedisTimeoutException("Command execution timeout for command: " + command + " with params: " + Arrays.toString(details.getParams()))); + details.setException(new RedisTimeoutException("Command execution timeout for command: " + command + " with params: " + LogHelper.toString(details.getParams()))); } details.getAttemptPromise().tryFailure(details.getException()); return; @@ -605,14 +620,14 @@ public class CommandAsyncService implements CommandAsyncExecutor { if (!future.isSuccess()) { details.setException(new WriteRedisConnectionException( - "Can't write command: " + details.getCommand() + ", params: " + Arrays.toString(details.getParams()) + " to channel: " + future.channel(), future.cause())); + "Can't write command: " + details.getCommand() + ", params: " + LogHelper.toString(details.getParams()) + " to channel: " + future.channel(), future.cause())); return; } details.getTimeout().cancel(); long timeoutTime = connectionManager.getConfig().getTimeout(); - if (QueueCommand.TIMEOUTLESS_COMMANDS.contains(details.getCommand().getName())) { + if (RedisCommands.BLOCKING_COMMANDS.contains(details.getCommand().getName())) { Long popTimeout = Long.valueOf(details.getParams()[details.getParams().length - 1].toString()); handleBlockingOperations(details, connection, popTimeout); if (popTimeout == 0) { @@ -629,7 +644,7 @@ public class CommandAsyncService implements CommandAsyncExecutor { public void run(Timeout timeout) throws Exception { details.getAttemptPromise().tryFailure( new RedisTimeoutException("Redis server response timeout (" + timeoutAmount + " ms) occured for command: " + details.getCommand() - + " with params: " + Arrays.toString(details.getParams()) + " channel: " + connection.getChannel())); + + " with params: " + LogHelper.toString(details.getParams()) + " channel: " + connection.getChannel())); } }; @@ -789,22 +804,51 @@ public class CommandAsyncService implements CommandAsyncExecutor { } private void handleReference(RPromise mainPromise, R res) { - if (res instanceof List || res instanceof ListScanResult) { - List r = res instanceof ListScanResult ? ((ListScanResult)res).getValues() : (List) res; + if (res instanceof List) { + List r = (List)res; for (int i = 0; i < r.size(); i++) { if (r.get(i) instanceof RedissonReference) { try { - r.set(i ,(redisson != null - ? 
RedissonObjectFactory.fromReference(redisson, (RedissonReference) r.get(i)) - : RedissonObjectFactory.fromReference(redissonReactive, (RedissonReference) r.get(i)))); + r.set(i, redisson != null + ? RedissonObjectFactory.fromReference(redisson, (RedissonReference) r.get(i)) + : RedissonObjectFactory.fromReference(redissonReactive, (RedissonReference) r.get(i))); } catch (Exception exception) {//skip and carry on to next one. } } else if (r.get(i) instanceof ScoredEntry && ((ScoredEntry) r.get(i)).getValue() instanceof RedissonReference) { try { - ScoredEntry se = ((ScoredEntry) r.get(i)); - r.set(i ,new ScoredEntry(se.getScore(), redisson != null + ScoredEntry se = ((ScoredEntry) r.get(i)); + se = new ScoredEntry(se.getScore(), redisson != null + ? RedissonObjectFactory.fromReference(redisson, (RedissonReference) se.getValue()) + : RedissonObjectFactory.fromReference(redissonReactive, (RedissonReference) se.getValue())); + r.set(i, se); + } catch (Exception exception) {//skip and carry on to next one. + } + } + } + mainPromise.trySuccess(res); + } else if (res instanceof ListScanResult) { + List r = ((ListScanResult)res).getValues(); + for (int i = 0; i < r.size(); i++) { + Object obj = r.get(i); + if (!(obj instanceof ScanObjectEntry)) { + break; + } + ScanObjectEntry e = r.get(i); + if (e.getObj() instanceof RedissonReference) { + try { + r.set(i , new ScanObjectEntry(e.getBuf(), redisson != null + ? RedissonObjectFactory.fromReference(redisson, (RedissonReference) e.getObj()) + : RedissonObjectFactory.fromReference(redissonReactive, (RedissonReference) e.getObj()))); + } catch (Exception exception) {//skip and carry on to next one. + } + } else if (e.getObj() instanceof ScoredEntry && ((ScoredEntry) e.getObj()).getValue() instanceof RedissonReference) { + try { + ScoredEntry se = ((ScoredEntry) e.getObj()); + se = new ScoredEntry(se.getScore(), redisson != null ? RedissonObjectFactory.fromReference(redisson, (RedissonReference) se.getValue()) - : RedissonObjectFactory.fromReference(redissonReactive, (RedissonReference) se.getValue()))); + : RedissonObjectFactory.fromReference(redissonReactive, (RedissonReference) se.getValue())); + + r.set(i, new ScanObjectEntry(e.getBuf(), se)); } catch (Exception exception) {//skip and carry on to next one. } } diff --git a/redisson/src/main/java/org/redisson/config/BaseMasterSlaveServersConfig.java b/redisson/src/main/java/org/redisson/config/BaseMasterSlaveServersConfig.java index 6f9f23984..9965501c8 100644 --- a/redisson/src/main/java/org/redisson/config/BaseMasterSlaveServersConfig.java +++ b/redisson/src/main/java/org/redisson/config/BaseMasterSlaveServersConfig.java @@ -44,22 +44,22 @@ public class BaseMasterSlaveServersConfigeach slave node */ - private int slaveConnectionMinimumIdleSize = 5; + private int slaveConnectionMinimumIdleSize = 10; /** * Redis 'slave' node maximum connection pool size for each slave node */ - private int slaveConnectionPoolSize = 250; + private int slaveConnectionPoolSize = 64; /** * Redis 'master' node minimum idle connection amount for each slave node */ - private int masterConnectionMinimumIdleSize = 5; + private int masterConnectionMinimumIdleSize = 10; /** * Redis 'master' node maximum connection pool size */ - private int masterConnectionPoolSize = 250; + private int masterConnectionPoolSize = 64; private ReadMode readMode = ReadMode.SLAVE; @@ -81,7 +81,7 @@ public class BaseMasterSlaveServersConfigeach slave node. *
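The `LZ4Codec` and `SnappyCodec` hunks earlier in this diff wrap the decompressed bytes with `Unpooled.wrappedBuffer` and now release that buffer in a `finally` block, so the inner codec can no longer leak it when decoding throws. A minimal, self-contained sketch of the same pattern using plain Netty; the `decode` helper below is illustrative and only stands in for `innerCodec.getValueDecoder().decode(bf, state)`:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import java.nio.charset.StandardCharsets;

public class ReleaseAfterDecode {

    // Stand-in for innerCodec.getValueDecoder().decode(bf, state)
    static String decode(ByteBuf buf) {
        return buf.toString(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] decompressed = "hello".getBytes(StandardCharsets.UTF_8);
        ByteBuf bf = Unpooled.wrappedBuffer(decompressed);
        try {
            // use the wrapped buffer only inside the try block
            System.out.println(decode(bf));
        } finally {
            // always release, even if the inner decoder throws
            bf.release();
        }
        System.out.println("refCnt after release: " + bf.refCnt()); // prints 0
    }
}
```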

-     * Default is 250
+     * Default is 64
      *
      * @see #setSlaveConnectionMinimumIdleSize(int)
      *
@@ -99,7 +99,7 @@ public class BaseMasterSlaveServersConfig
-     * Default is 250
+     * Default is 64
      *
      * @see #setMasterConnectionMinimumIdleSize(int)
      *
@@ -155,7 +155,7 @@ public class BaseMasterSlaveServersConfigeach slave node
      *
-     * Default is 5
+     * Default is 10
      *
      * @see #setSlaveConnectionPoolSize(int)
      *
@@ -173,7 +173,7 @@ public class BaseMasterSlaveServersConfigeach slave node
      *
-     * Default is 5
+     * Default is 10
      *
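The javadoc hunks above reflect the new connection pool defaults: the maximum pool size drops from 250 to 64 and the minimum idle size rises from 5 to 10, for master, slave and single-server connections alike. Applications that relied on the old ceiling can restore it explicitly; a sketch using the setters these javadocs reference, with the address and sizes as placeholder values:

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class PoolSizeExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Single-server mode: override the new 64/10 defaults if needed
        config.useSingleServer()
              .setAddress("redis://127.0.0.1:6379")
              .setConnectionPoolSize(250)          // default is now 64
              .setConnectionMinimumIdleSize(10);   // default is now 10

        RedissonClient client = Redisson.create(config);
        // ... use the client ...
        client.shutdown();
    }
}
```

In master/slave, sentinel and cluster modes the corresponding setters are `setMasterConnectionPoolSize`, `setSlaveConnectionPoolSize`, `setMasterConnectionMinimumIdleSize` and `setSlaveConnectionMinimumIdleSize`.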

* @see #setMasterConnectionPoolSize(int) * diff --git a/redisson/src/main/java/org/redisson/config/ClusterServersConfig.java b/redisson/src/main/java/org/redisson/config/ClusterServersConfig.java index dbe81d257..03ac2317a 100644 --- a/redisson/src/main/java/org/redisson/config/ClusterServersConfig.java +++ b/redisson/src/main/java/org/redisson/config/ClusterServersConfig.java @@ -15,18 +15,23 @@ */ package org.redisson.config; -import java.net.URI; +import java.net.URL; import java.util.ArrayList; import java.util.List; -import org.redisson.misc.URIBuilder; +import org.redisson.misc.URLBuilder; +/** + * + * @author Nikita Koksharov + * + */ public class ClusterServersConfig extends BaseMasterSlaveServersConfig { /** * Redis cluster node urls list */ - private List nodeAddresses = new ArrayList(); + private List nodeAddresses = new ArrayList(); /** * Redis cluster scan interval in milliseconds @@ -50,14 +55,14 @@ public class ClusterServersConfig extends BaseMasterSlaveServersConfig getNodeAddresses() { + public List getNodeAddresses() { return nodeAddresses; } - void setNodeAddresses(List nodeAddresses) { + void setNodeAddresses(List nodeAddresses) { this.nodeAddresses = nodeAddresses; } diff --git a/redisson/src/main/java/org/redisson/config/Config.java b/redisson/src/main/java/org/redisson/config/Config.java index 0992b1e69..233778dc0 100644 --- a/redisson/src/main/java/org/redisson/config/Config.java +++ b/redisson/src/main/java/org/redisson/config/Config.java @@ -49,6 +49,8 @@ public class Config { private ElasticacheServersConfig elasticacheServersConfig; + private ReplicatedServersConfig replicatedServersConfig; + /** * Threads amount shared between all redis node clients */ @@ -117,6 +119,9 @@ public class Config { if (oldConf.getElasticacheServersConfig() != null) { setElasticacheServersConfig(new ElasticacheServersConfig(oldConf.getElasticacheServersConfig())); } + if (oldConf.getReplicatedServersConfig() != null) { + setReplicatedServersConfig(new ReplicatedServersConfig(oldConf.getReplicatedServersConfig())); + } } @@ -214,6 +219,7 @@ public class Config { checkSentinelServersConfig(); checkSingleServerConfig(); checkElasticacheServersConfig(); + checkReplicatedServersConfig(); if (clusterServersConfig == null) { clusterServersConfig = config; @@ -230,10 +236,12 @@ public class Config { } /** - * Init AWS Elasticache servers configuration. * - * @return ElasticacheServersConfig + * Use {@link #useReplicatedServers()} + * + * @return config object */ + @Deprecated public ElasticacheServersConfig useElasticacheServers() { return useElasticacheServers(new ElasticacheServersConfig()); } @@ -258,6 +266,37 @@ public class Config { this.elasticacheServersConfig = elasticacheServersConfig; } + /** + * Init Replicated servers configuration. 
+ * Most used with Azure Redis Cache or AWS Elasticache + * + * @return ReplicatedServersConfig + */ + public ReplicatedServersConfig useReplicatedServers() { + return useReplicatedServers(new ReplicatedServersConfig()); + } + + ReplicatedServersConfig useReplicatedServers(ReplicatedServersConfig config) { + checkClusterServersConfig(); + checkMasterSlaveServersConfig(); + checkSentinelServersConfig(); + checkSingleServerConfig(); + checkElasticacheServersConfig(); + + if (replicatedServersConfig == null) { + replicatedServersConfig = new ReplicatedServersConfig(); + } + return replicatedServersConfig; + } + + ReplicatedServersConfig getReplicatedServersConfig() { + return replicatedServersConfig; + } + + void setReplicatedServersConfig(ReplicatedServersConfig replicatedServersConfig) { + this.replicatedServersConfig = replicatedServersConfig; + } + /** * Init single server configuration. * @@ -272,6 +311,7 @@ public class Config { checkMasterSlaveServersConfig(); checkSentinelServersConfig(); checkElasticacheServersConfig(); + checkReplicatedServersConfig(); if (singleServerConfig == null) { singleServerConfig = config; @@ -301,6 +341,7 @@ public class Config { checkSingleServerConfig(); checkMasterSlaveServersConfig(); checkElasticacheServersConfig(); + checkReplicatedServersConfig(); if (this.sentinelServersConfig == null) { this.sentinelServersConfig = sentinelServersConfig; @@ -330,6 +371,7 @@ public class Config { checkSingleServerConfig(); checkSentinelServersConfig(); checkElasticacheServersConfig(); + checkReplicatedServersConfig(); if (masterSlaveServersConfig == null) { masterSlaveServersConfig = config; @@ -400,6 +442,12 @@ public class Config { } } + private void checkReplicatedServersConfig() { + if (replicatedServersConfig != null) { + throw new IllegalStateException("Replication servers config already used!"); + } + } + /** * Activates an unix socket if servers binded to loopback interface. * Also used for epoll transport activation. @@ -440,7 +488,9 @@ public class Config { * Use external ExecutorService. ExecutorService processes * all listeners of RTopic, * RRemoteService invocation handlers - * and RExecutorService tasks. + * and RExecutorService tasks. + *
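`useReplicatedServers()` above is the replacement for the now-deprecated `useElasticacheServers()` and targets managed replicated deployments such as AWS ElastiCache or Azure Redis Cache. A usage sketch; the endpoints are placeholders and the `redis://` scheme is assumed per the current address format:

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class ReplicatedServersExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useReplicatedServers()
              // list all nodes of the replication group; roles are discovered by polling
              .addNodeAddress("redis://replica-group.example.com:6379",
                              "redis://replica-group-ro.example.com:6379")
              .setScanInterval(2000)   // role scan interval in ms (default 1000)
              .setDatabase(0);

        RedissonClient client = Redisson.create(config);
        // ... use the client ...
        client.shutdown();
    }
}
```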

+     * The caller is responsible for closing the ExecutorService.
      *
      * @param executor object
      * @return config
@@ -463,6 +513,8 @@ public class Config {
      *
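This note, together with the matching one for the `EventLoopGroup` a few lines below, moves shutdown responsibility to the caller: Redisson no longer closes executors or event loop groups it did not create (see the `sharedExecutor`/`sharedEventLoopGroup` flags added to `MasterSlaveConnectionManager` further down in this diff). A sketch of the intended lifecycle; the thread counts and address are illustrative:

```java
import io.netty.channel.nio.NioEventLoopGroup;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class SharedResourcesExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService listenerExecutor = Executors.newFixedThreadPool(8);
        NioEventLoopGroup eventLoopGroup = new NioEventLoopGroup(4);

        Config config = new Config();
        config.setExecutor(listenerExecutor)       // runs topic listeners, remote service handlers, executor tasks
              .setEventLoopGroup(eventLoopGroup);  // only Nio or Epoll groups are accepted
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");

        RedissonClient client = Redisson.create(config);
        client.shutdown();                         // does NOT close the shared resources

        // the caller owns both resources and must close them itself
        listenerExecutor.shutdown();
        eventLoopGroup.shutdownGracefully().sync();
    }
}
```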

      * Only {@link io.netty.channel.epoll.EpollEventLoopGroup} or
      * {@link io.netty.channel.nio.NioEventLoopGroup} can be used.
+     *

+ * The caller is responsible for closing the EventLoopGroup. * * @param eventLoopGroup object * @return config diff --git a/redisson/src/main/java/org/redisson/config/ConfigSupport.java b/redisson/src/main/java/org/redisson/config/ConfigSupport.java index 2198c94a0..8d08feb1d 100644 --- a/redisson/src/main/java/org/redisson/config/ConfigSupport.java +++ b/redisson/src/main/java/org/redisson/config/ConfigSupport.java @@ -28,6 +28,7 @@ import org.redisson.client.codec.Codec; import org.redisson.cluster.ClusterConnectionManager; import org.redisson.connection.ConnectionManager; import org.redisson.connection.ElasticacheConnectionManager; +import org.redisson.connection.ReplicatedConnectionManager; import org.redisson.connection.MasterSlaveConnectionManager; import org.redisson.connection.SentinelConnectionManager; import org.redisson.connection.SingleConnectionManager; @@ -47,7 +48,13 @@ import com.fasterxml.jackson.databind.ser.impl.SimpleFilterProvider; import com.fasterxml.jackson.dataformat.yaml.YAMLFactory; import org.redisson.codec.CodecProvider; import org.redisson.liveobject.provider.ResolverProvider; +import org.redisson.misc.URLBuilder; +/** + * + * @author Nikita Koksharov + * + */ public class ConfigSupport { @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, property = "class") @@ -106,57 +113,120 @@ public class ConfigSupport { @JsonProperty ElasticacheServersConfig elasticacheServersConfig; + @JsonProperty + ReplicatedServersConfig replicatedServersConfig; + } private final ObjectMapper jsonMapper = createMapper(null); private final ObjectMapper yamlMapper = createMapper(new YAMLFactory()); public T fromJSON(String content, Class configType) throws IOException { - return jsonMapper.readValue(content, configType); + URLBuilder.replaceURLFactory(); + try { + return jsonMapper.readValue(content, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromJSON(File file, Class configType) throws IOException { - return jsonMapper.readValue(file, configType); + URLBuilder.replaceURLFactory(); + try { + return jsonMapper.readValue(file, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromJSON(URL url, Class configType) throws IOException { - return jsonMapper.readValue(url, configType); + URLBuilder.replaceURLFactory(); + try { + return jsonMapper.readValue(url, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromJSON(Reader reader, Class configType) throws IOException { - return jsonMapper.readValue(reader, configType); + URLBuilder.replaceURLFactory(); + try { + return jsonMapper.readValue(reader, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromJSON(InputStream inputStream, Class configType) throws IOException { - return jsonMapper.readValue(inputStream, configType); + URLBuilder.replaceURLFactory(); + try { + return jsonMapper.readValue(inputStream, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public String toJSON(Config config) throws IOException { - return jsonMapper.writeValueAsString(config); + URLBuilder.replaceURLFactory(); + try { + return jsonMapper.writeValueAsString(config); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromYAML(String content, Class configType) throws IOException { - return yamlMapper.readValue(content, configType); + URLBuilder.replaceURLFactory(); + try { + return yamlMapper.readValue(content, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromYAML(File file, Class configType) throws 
IOException { - return yamlMapper.readValue(file, configType); + URLBuilder.replaceURLFactory(); + try { + return yamlMapper.readValue(file, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromYAML(URL url, Class configType) throws IOException { - return yamlMapper.readValue(url, configType); + URLBuilder.replaceURLFactory(); + try { + return yamlMapper.readValue(url, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromYAML(Reader reader, Class configType) throws IOException { - return yamlMapper.readValue(reader, configType); + URLBuilder.replaceURLFactory(); + try { + return yamlMapper.readValue(reader, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public T fromYAML(InputStream inputStream, Class configType) throws IOException { - return yamlMapper.readValue(inputStream, configType); + URLBuilder.replaceURLFactory(); + try { + return yamlMapper.readValue(inputStream, configType); + } finally { + URLBuilder.restoreURLFactory(); + } } public String toYAML(Config config) throws IOException { - return yamlMapper.writeValueAsString(config); + URLBuilder.replaceURLFactory(); + try { + return yamlMapper.writeValueAsString(config); + } finally { + URLBuilder.restoreURLFactory(); + } } public static ConnectionManager createConnectionManager(Config configCopy) { @@ -175,6 +245,9 @@ public class ConfigSupport { } else if (configCopy.getElasticacheServersConfig() != null) { validate(configCopy.getElasticacheServersConfig()); return new ElasticacheConnectionManager(configCopy.getElasticacheServersConfig(), configCopy); + } else if (configCopy.getReplicatedServersConfig() != null) { + validate(configCopy.getReplicatedServersConfig()); + return new ReplicatedConnectionManager(configCopy.getReplicatedServersConfig(), configCopy); } else { throw new IllegalArgumentException("server(s) address(es) not defined!"); } diff --git a/redisson/src/main/java/org/redisson/config/ElasticacheServersConfig.java b/redisson/src/main/java/org/redisson/config/ElasticacheServersConfig.java index 8e35a5d50..c57524add 100644 --- a/redisson/src/main/java/org/redisson/config/ElasticacheServersConfig.java +++ b/redisson/src/main/java/org/redisson/config/ElasticacheServersConfig.java @@ -15,91 +15,17 @@ */ package org.redisson.config; -import java.net.URI; -import java.util.ArrayList; -import java.util.List; - -import org.redisson.misc.URIBuilder; - /** - * Configuration for an AWS ElastiCache replication group. A replication group is composed - * of a single master endpoint and multiple read slaves. - * - * @author Steve Ungerer + * Use {@link org.redisson.config.ReplicatedServersConfig} */ -public class ElasticacheServersConfig extends BaseMasterSlaveServersConfig { - - /** - * Replication group node urls list - */ - private List nodeAddresses = new ArrayList(); - - /** - * Replication group scan interval in milliseconds - */ - private int scanInterval = 1000; - - /** - * Database index used for Redis connection - */ - private int database = 0; +@Deprecated +public class ElasticacheServersConfig extends ReplicatedServersConfig { public ElasticacheServersConfig() { } - - ElasticacheServersConfig(ElasticacheServersConfig config) { + + public ElasticacheServersConfig(ReplicatedServersConfig config) { super(config); - setNodeAddresses(config.getNodeAddresses()); - setScanInterval(config.getScanInterval()); - setDatabase(config.getDatabase()); - } - - /** - * Add Redis cluster node address. 
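The `ConfigSupport` hunks above wrap every `fromJSON`/`fromYAML`/`toJSON` call in `URLBuilder.replaceURLFactory()` and `URLBuilder.restoreURLFactory()` so that `redis://` addresses can be parsed into `java.net.URL` while a config is being read or written. From application code this stays hidden behind `Config`'s loaders; a sketch, with the JSON shape assumed from the 2.x/3.x config format:

```java
import java.io.IOException;
import org.redisson.config.Config;

public class LoadConfigExample {
    public static void main(String[] args) throws IOException {
        // The redis:// scheme is what the temporary URL stream-handler
        // factory swap above makes parseable as java.net.URL.
        String json = "{\"singleServerConfig\":{\"address\":\"redis://127.0.0.1:6379\"}}";
        Config config = Config.fromJSON(json);
        System.out.println(config.toJSON());
    }
}
```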
Use follow format -- host:port - * - * @param addresses in host:port format - * @return config - */ - public ElasticacheServersConfig addNodeAddress(String ... addresses) { - for (String address : addresses) { - nodeAddresses.add(URIBuilder.create(address)); - } - return this; - } - public List getNodeAddresses() { - return nodeAddresses; - } - void setNodeAddresses(List nodeAddresses) { - this.nodeAddresses = nodeAddresses; - } - - public int getScanInterval() { - return scanInterval; - } - /** - * Elasticache node scan interval in milliseconds - * - * @param scanInterval in milliseconds - * @return config - */ - public ElasticacheServersConfig setScanInterval(int scanInterval) { - this.scanInterval = scanInterval; - return this; - } - - /** - * Database index used for Redis connection - * Default is 0 - * - * @param database number - * @return config - */ - public ElasticacheServersConfig setDatabase(int database) { - this.database = database; - return this; - } - public int getDatabase() { - return database; } } diff --git a/redisson/src/main/java/org/redisson/config/MasterSlaveServersConfig.java b/redisson/src/main/java/org/redisson/config/MasterSlaveServersConfig.java index 1cb5a7485..9a5023a94 100644 --- a/redisson/src/main/java/org/redisson/config/MasterSlaveServersConfig.java +++ b/redisson/src/main/java/org/redisson/config/MasterSlaveServersConfig.java @@ -15,25 +15,28 @@ */ package org.redisson.config; -import java.net.URI; -import java.util.Collections; +import java.net.URL; import java.util.HashSet; -import java.util.List; import java.util.Set; -import org.redisson.misc.URIBuilder; +import org.redisson.misc.URLBuilder; +/** + * + * @author Nikita Koksharov + * + */ public class MasterSlaveServersConfig extends BaseMasterSlaveServersConfig { /** * Redis slave servers addresses */ - private Set slaveAddresses = new HashSet(); + private Set slaveAddresses = new HashSet(); /** * Redis master server address */ - private List masterAddress; + private URL masterAddress; /** * Database index used for Redis connection @@ -59,19 +62,19 @@ public class MasterSlaveServersConfig extends BaseMasterSlaveServersConfig getSlaveAddresses() { + public Set getSlaveAddresses() { return slaveAddresses; } - public void setSlaveAddresses(Set readAddresses) { + public void setSlaveAddresses(Set readAddresses) { this.slaveAddresses = readAddresses; } diff --git a/redisson/src/main/java/org/redisson/config/ReplicatedServersConfig.java b/redisson/src/main/java/org/redisson/config/ReplicatedServersConfig.java new file mode 100644 index 000000000..541befbd6 --- /dev/null +++ b/redisson/src/main/java/org/redisson/config/ReplicatedServersConfig.java @@ -0,0 +1,106 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.config; + +import java.net.URL; +import java.util.ArrayList; +import java.util.List; + +import org.redisson.misc.URLBuilder; + +/** + * Configuration for an Azure Redis Cache or AWS ElastiCache servers. 
+ * A replication group is composed of a single master endpoint and multiple read slaves. + * + * @author Steve Ungerer + * @author Nikita Koksharov + */ +public class ReplicatedServersConfig extends BaseMasterSlaveServersConfig { + + /** + * Replication group node urls list + */ + private List nodeAddresses = new ArrayList(); + + /** + * Replication group scan interval in milliseconds + */ + private int scanInterval = 1000; + + /** + * Database index used for Redis connection + */ + private int database = 0; + + public ReplicatedServersConfig() { + } + + ReplicatedServersConfig(ReplicatedServersConfig config) { + super(config); + setNodeAddresses(config.getNodeAddresses()); + setScanInterval(config.getScanInterval()); + setDatabase(config.getDatabase()); + } + + /** + * Add Redis cluster node address. Use follow format -- host:port + * + * @param addresses in host:port format + * @return config + */ + public ReplicatedServersConfig addNodeAddress(String ... addresses) { + for (String address : addresses) { + nodeAddresses.add(URLBuilder.create(address)); + } + return this; + } + public List getNodeAddresses() { + return nodeAddresses; + } + void setNodeAddresses(List nodeAddresses) { + this.nodeAddresses = nodeAddresses; + } + + public int getScanInterval() { + return scanInterval; + } + /** + * Elasticache node scan interval in milliseconds + * + * @param scanInterval in milliseconds + * @return config + */ + public ReplicatedServersConfig setScanInterval(int scanInterval) { + this.scanInterval = scanInterval; + return this; + } + + /** + * Database index used for Redis connection + * Default is 0 + * + * @param database number + * @return config + */ + public ReplicatedServersConfig setDatabase(int database) { + this.database = database; + return this; + } + public int getDatabase() { + return database; + } + +} diff --git a/redisson/src/main/java/org/redisson/config/SentinelServersConfig.java b/redisson/src/main/java/org/redisson/config/SentinelServersConfig.java index c86abd8cf..f52741045 100644 --- a/redisson/src/main/java/org/redisson/config/SentinelServersConfig.java +++ b/redisson/src/main/java/org/redisson/config/SentinelServersConfig.java @@ -15,15 +15,20 @@ */ package org.redisson.config; -import java.net.URI; +import java.net.URL; import java.util.ArrayList; import java.util.List; -import org.redisson.misc.URIBuilder; +import org.redisson.misc.URLBuilder; +/** + * + * @author Nikita Koksharov + * + */ public class SentinelServersConfig extends BaseMasterSlaveServersConfig { - private List sentinelAddresses = new ArrayList(); + private List sentinelAddresses = new ArrayList(); private String masterName; @@ -64,14 +69,14 @@ public class SentinelServersConfig extends BaseMasterSlaveServersConfig getSentinelAddresses() { + public List getSentinelAddresses() { return sentinelAddresses; } - void setSentinelAddresses(List sentinelAddresses) { + void setSentinelAddresses(List sentinelAddresses) { this.sentinelAddresses = sentinelAddresses; } diff --git a/redisson/src/main/java/org/redisson/config/SingleServerConfig.java b/redisson/src/main/java/org/redisson/config/SingleServerConfig.java index 5aaece686..d5707ed6b 100644 --- a/redisson/src/main/java/org/redisson/config/SingleServerConfig.java +++ b/redisson/src/main/java/org/redisson/config/SingleServerConfig.java @@ -15,11 +15,9 @@ */ package org.redisson.config; -import java.net.URI; -import java.util.Collections; -import java.util.List; +import java.net.URL; -import org.redisson.misc.URIBuilder; +import 
org.redisson.misc.URLBuilder; /** * @@ -32,7 +30,7 @@ public class SingleServerConfig extends BaseConfig { * Redis server address * */ - private List address; + private URL address; /** * Minimum idle subscription connection amount @@ -48,12 +46,12 @@ public class SingleServerConfig extends BaseConfig { /** * Minimum idle Redis connection amount */ - private int connectionMinimumIdleSize = 5; + private int connectionMinimumIdleSize = 10; /** * Redis connection maximum pool size */ - private int connectionPoolSize = 250; + private int connectionPoolSize = 64; /** * Database index used for Redis connection @@ -92,7 +90,7 @@ public class SingleServerConfig extends BaseConfig { /** * Redis connection pool size *

- * Default is 250 + * Default is 64 * * @param connectionPoolSize - pool size * @return config @@ -129,19 +127,19 @@ public class SingleServerConfig extends BaseConfig { */ public SingleServerConfig setAddress(String address) { if (address != null) { - this.address = Collections.singletonList(URIBuilder.create(address)); + this.address = URLBuilder.create(address); } return this; } - public URI getAddress() { + public URL getAddress() { if (address != null) { - return address.get(0); + return address; } return null; } - void setAddress(URI address) { + void setAddress(URL address) { if (address != null) { - this.address = Collections.singletonList(address); + this.address = address; } } @@ -197,7 +195,7 @@ public class SingleServerConfig extends BaseConfig { /** * Minimum idle Redis connection amount. *

- * Default is 5 + * Default is 10 * * @param connectionMinimumIdleSize - connections amount * @return config diff --git a/redisson/src/main/java/org/redisson/connection/ClientConnectionsEntry.java b/redisson/src/main/java/org/redisson/connection/ClientConnectionsEntry.java index e0d869a23..86302a25f 100644 --- a/redisson/src/main/java/org/redisson/connection/ClientConnectionsEntry.java +++ b/redisson/src/main/java/org/redisson/connection/ClientConnectionsEntry.java @@ -27,12 +27,12 @@ import org.redisson.client.RedisConnection; import org.redisson.client.RedisPubSubConnection; import org.redisson.config.MasterSlaveServersConfig; import org.redisson.misc.RPromise; +import org.redisson.pubsub.AsyncSemaphore; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; -import io.netty.util.concurrent.Promise; public class ClientConnectionsEntry { @@ -40,10 +40,10 @@ public class ClientConnectionsEntry { private final Queue allSubscribeConnections = new ConcurrentLinkedQueue(); private final Queue freeSubscribeConnections = new ConcurrentLinkedQueue(); - private final AtomicInteger freeSubscribeConnectionsCounter = new AtomicInteger(); + private final AsyncSemaphore freeSubscribeConnectionsCounter; private final Queue freeConnections = new ConcurrentLinkedQueue(); - private final AtomicInteger freeConnectionsCounter = new AtomicInteger(); + private final AsyncSemaphore freeConnectionsCounter; public enum FreezeReason {MANAGER, RECONNECT, SYSTEM} @@ -59,10 +59,10 @@ public class ClientConnectionsEntry { public ClientConnectionsEntry(RedisClient client, int poolMinSize, int poolMaxSize, int subscribePoolMinSize, int subscribePoolMaxSize, ConnectionManager connectionManager, NodeType serverMode) { this.client = client; - this.freeConnectionsCounter.set(poolMaxSize); + this.freeConnectionsCounter = new AsyncSemaphore(poolMaxSize); this.connectionManager = connectionManager; this.nodeType = serverMode; - this.freeSubscribeConnectionsCounter.set(subscribePoolMaxSize); + this.freeSubscribeConnectionsCounter = new AsyncSemaphore(subscribePoolMaxSize); if (subscribePoolMaxSize > 0) { connectionManager.getConnectionWatcher().add(subscribePoolMinSize, subscribePoolMaxSize, freeSubscribeConnections, freeSubscribeConnectionsCounter); @@ -107,27 +107,19 @@ public class ClientConnectionsEntry { } public int getFreeAmount() { - return freeConnectionsCounter.get(); + return freeConnectionsCounter.getCounter(); } - private boolean tryAcquire(AtomicInteger counter) { - while (true) { - int value = counter.get(); - if (value == 0) { - return false; - } - if (counter.compareAndSet(value, value - 1)) { - return true; - } - } + public void acquireConnection(Runnable runnable) { + freeConnectionsCounter.acquire(runnable); } - - public boolean tryAcquireConnection() { - return tryAcquire(freeConnectionsCounter); + + public void removeConnection(Runnable runnable) { + freeConnectionsCounter.remove(runnable); } public void releaseConnection() { - freeConnectionsCounter.incrementAndGet(); + freeConnectionsCounter.release(); } public RedisConnection pollConnection() { @@ -228,12 +220,12 @@ public class ClientConnectionsEntry { freeSubscribeConnections.add(connection); } - public boolean tryAcquireSubscribeConnection() { - return tryAcquire(freeSubscribeConnectionsCounter); + public void acquireSubscribeConnection(Runnable runnable) { + freeSubscribeConnectionsCounter.acquire(runnable); } public void releaseSubscribeConnection() { - 
freeSubscribeConnectionsCounter.incrementAndGet(); + freeSubscribeConnectionsCounter.release(); } public boolean freezeMaster(FreezeReason reason) { diff --git a/redisson/src/main/java/org/redisson/connection/ConnectionManager.java b/redisson/src/main/java/org/redisson/connection/ConnectionManager.java index 4332fa3f3..b3094aca4 100644 --- a/redisson/src/main/java/org/redisson/connection/ConnectionManager.java +++ b/redisson/src/main/java/org/redisson/connection/ConnectionManager.java @@ -16,7 +16,7 @@ package org.redisson.connection; import java.net.InetSocketAddress; -import java.net.URI; +import java.net.URL; import java.util.Collection; import java.util.Set; import java.util.concurrent.ExecutorService; @@ -29,6 +29,7 @@ import org.redisson.client.RedisConnection; import org.redisson.client.RedisPubSubListener; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommand; +import org.redisson.command.CommandSyncService; import org.redisson.config.MasterSlaveServersConfig; import org.redisson.misc.InfinitySemaphoreLatch; import org.redisson.misc.RPromise; @@ -45,9 +46,11 @@ import io.netty.util.TimerTask; */ public interface ConnectionManager { + CommandSyncService getCommandExecutor(); + ExecutorService getExecutor(); - URI getLastClusterNode(); + URL getLastClusterNode(); boolean isClusterMode(); @@ -109,9 +112,9 @@ public interface ConnectionManager { Codec unsubscribe(String channelName, AsyncSemaphore lock); - Codec unsubscribe(String channelName); + RFuture unsubscribe(String channelName, boolean temporaryDown); - Codec punsubscribe(String channelName); + RFuture punsubscribe(String channelName, boolean temporaryDown); Codec punsubscribe(String channelName, AsyncSemaphore lock); diff --git a/redisson/src/main/java/org/redisson/connection/ElasticacheConnectionManager.java b/redisson/src/main/java/org/redisson/connection/ElasticacheConnectionManager.java index 615b6ffee..eb3434d2f 100644 --- a/redisson/src/main/java/org/redisson/connection/ElasticacheConnectionManager.java +++ b/redisson/src/main/java/org/redisson/connection/ElasticacheConnectionManager.java @@ -15,161 +15,15 @@ */ package org.redisson.connection; -import java.net.URI; -import java.util.HashMap; -import java.util.Map; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicReference; - -import org.redisson.client.RedisClient; -import org.redisson.client.RedisConnection; -import org.redisson.client.RedisConnectionException; -import org.redisson.client.RedisException; -import org.redisson.client.protocol.RedisCommands; -import org.redisson.config.BaseMasterSlaveServersConfig; import org.redisson.config.Config; import org.redisson.config.ElasticacheServersConfig; -import org.redisson.config.MasterSlaveServersConfig; -import org.redisson.misc.RPromise; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import io.netty.util.concurrent.GlobalEventExecutor; -import io.netty.util.concurrent.ScheduledFuture; - -/** - * {@link ConnectionManager} for AWS ElastiCache Replication Groups. By providing all nodes - * of the replication group to this manager, the role of each node can be polled to determine - * if a failover has occurred resulting in a new master. 
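The `ElasticacheConnectionManager` body deleted above (its logic now lives in `ReplicatedConnectionManager`) decided which node is the master by polling each node's `INFO REPLICATION` output and reading its `role:` line. A self-contained sketch of that parsing step; the sample payload is illustrative:

```java
public class ReplicationRoleParser {

    enum Role { master, slave }

    private static final String ROLE_KEY = "role:";

    // Mirrors the determineRole logic removed in this diff
    static Role determineRole(String infoReplication) {
        for (String line : infoReplication.split("\\r\\n")) {
            if (line.startsWith(ROLE_KEY)) {
                return Role.valueOf(line.substring(ROLE_KEY.length()));
            }
        }
        throw new IllegalStateException("Cannot determine node role from INFO replication output");
    }

    public static void main(String[] args) {
        String sample = "# Replication\r\nrole:master\r\nconnected_slaves:2\r\n";
        System.out.println(determineRole(sample)); // prints "master"
    }
}
```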
- * - * @author Steve Ungerer - */ -public class ElasticacheConnectionManager extends MasterSlaveConnectionManager { - - private static final String ROLE_KEY = "role:"; - - private final Logger log = LoggerFactory.getLogger(getClass()); - private AtomicReference currentMaster = new AtomicReference(); - - private final Map nodeConnections = new HashMap(); - - private ScheduledFuture monitorFuture; - - private enum Role { - master, - slave - } +@Deprecated +public class ElasticacheConnectionManager extends ReplicatedConnectionManager { public ElasticacheConnectionManager(ElasticacheServersConfig cfg, Config config) { - super(config); - - this.config = create(cfg); - - for (URI addr : cfg.getNodeAddresses()) { - RedisConnection connection = connect(cfg, addr); - if (connection == null) { - continue; - } - - Role role = determineRole(connection.sync(RedisCommands.INFO_REPLICATION)); - if (Role.master.equals(role)) { - if (currentMaster.get() != null) { - throw new RedisException("Multiple masters detected"); - } - currentMaster.set(addr); - log.info("{} is the master", addr); - this.config.setMasterAddress(addr); - } else { - log.info("{} is a slave", addr); - this.config.addSlaveAddress(addr); - } - } - - if (currentMaster.get() == null) { - throw new RedisConnectionException("Can't connect to servers!"); - } - - init(this.config); - - monitorRoleChange(cfg); - } - - @Override - protected MasterSlaveServersConfig create(BaseMasterSlaveServersConfig cfg) { - MasterSlaveServersConfig res = super.create(cfg); - res.setDatabase(((ElasticacheServersConfig)cfg).getDatabase()); - return res; + super(cfg, config); } - private RedisConnection connect(ElasticacheServersConfig cfg, URI addr) { - RedisConnection connection = nodeConnections.get(addr); - if (connection != null) { - return connection; - } - RedisClient client = createClient(addr.getHost(), addr.getPort(), cfg.getConnectTimeout(), cfg.getRetryInterval() * cfg.getRetryAttempts()); - try { - connection = client.connect(); - RPromise future = newPromise(); - connectListener.onConnect(future, connection, null, config); - future.syncUninterruptibly(); - nodeConnections.put(addr, connection); - } catch (RedisConnectionException e) { - log.warn(e.getMessage(), e); - } catch (Exception e) { - log.error(e.getMessage(), e); - } - return connection; - } - - private void monitorRoleChange(final ElasticacheServersConfig cfg) { - monitorFuture = GlobalEventExecutor.INSTANCE.scheduleWithFixedDelay(new Runnable() { - @Override - public void run() { - try { - URI master = currentMaster.get(); - log.debug("Current master: {}", master); - for (URI addr : cfg.getNodeAddresses()) { - RedisConnection connection = connect(cfg, addr); - String replInfo = connection.sync(RedisCommands.INFO_REPLICATION); - log.trace("{} repl info: {}", addr, replInfo); - - Role role = determineRole(replInfo); - log.debug("node {} is {}", addr, role); - - if (Role.master.equals(role) && master.equals(addr)) { - log.debug("Current master {} unchanged", master); - } else if (Role.master.equals(role) && !master.equals(addr) && currentMaster.compareAndSet(master, addr)) { - log.info("Master has changed from {} to {}", master, addr); - changeMaster(singleSlotRange.getStartSlot(), addr.getHost(), addr.getPort()); - break; - } - } - } catch (Exception e) { - log.error(e.getMessage(), e); - } - } - - }, cfg.getScanInterval(), cfg.getScanInterval(), TimeUnit.MILLISECONDS); - } - - private Role determineRole(String data) { - for (String s : data.split("\\r\\n")) { - if (s.startsWith(ROLE_KEY)) 
{ - return Role.valueOf(s.substring(ROLE_KEY.length())); - } - } - throw new RedisException("Cannot determine node role from provided 'INFO replication' data"); - } - - @Override - public void shutdown() { - monitorFuture.cancel(true); - super.shutdown(); - - for (RedisConnection connection : nodeConnections.values()) { - connection.getRedisClient().shutdown(); - } - } } diff --git a/redisson/src/main/java/org/redisson/connection/IdleConnectionWatcher.java b/redisson/src/main/java/org/redisson/connection/IdleConnectionWatcher.java index c96852a2e..a9dab67c4 100644 --- a/redisson/src/main/java/org/redisson/connection/IdleConnectionWatcher.java +++ b/redisson/src/main/java/org/redisson/connection/IdleConnectionWatcher.java @@ -19,10 +19,10 @@ import java.util.Collection; import java.util.Queue; import java.util.concurrent.ConcurrentLinkedQueue; import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicInteger; import org.redisson.client.RedisConnection; import org.redisson.config.MasterSlaveServersConfig; +import org.redisson.pubsub.AsyncSemaphore; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -38,10 +38,10 @@ public class IdleConnectionWatcher { private final int minimumAmount; private final int maximumAmount; - private final AtomicInteger freeConnectionsCounter; + private final AsyncSemaphore freeConnectionsCounter; private final Collection connections; - public Entry(int minimumAmount, int maximumAmount, Collection connections, AtomicInteger freeConnectionsCounter) { + public Entry(int minimumAmount, int maximumAmount, Collection connections, AsyncSemaphore freeConnectionsCounter) { super(); this.minimumAmount = minimumAmount; this.maximumAmount = maximumAmount; @@ -84,10 +84,10 @@ public class IdleConnectionWatcher { } private boolean validateAmount(Entry entry) { - return entry.maximumAmount - entry.freeConnectionsCounter.get() + entry.connections.size() > entry.minimumAmount; + return entry.maximumAmount - entry.freeConnectionsCounter.getCounter() + entry.connections.size() > entry.minimumAmount; } - public void add(int minimumAmount, int maximumAmount, Collection connections, AtomicInteger freeConnectionsCounter) { + public void add(int minimumAmount, int maximumAmount, Collection connections, AsyncSemaphore freeConnectionsCounter) { entries.add(new Entry(minimumAmount, maximumAmount, connections, freeConnectionsCounter)); } diff --git a/redisson/src/main/java/org/redisson/connection/MasterSlaveConnectionManager.java b/redisson/src/main/java/org/redisson/connection/MasterSlaveConnectionManager.java index a1bda8e09..1069f2885 100644 --- a/redisson/src/main/java/org/redisson/connection/MasterSlaveConnectionManager.java +++ b/redisson/src/main/java/org/redisson/connection/MasterSlaveConnectionManager.java @@ -15,8 +15,9 @@ */ package org.redisson.connection; +import java.lang.reflect.Field; import java.net.InetSocketAddress; -import java.net.URI; +import java.net.URL; import java.util.Arrays; import java.util.Collection; import java.util.Collections; @@ -45,6 +46,7 @@ import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.pubsub.PubSubType; import org.redisson.cluster.ClusterSlotRange; +import org.redisson.command.CommandSyncService; import org.redisson.config.BaseMasterSlaveServersConfig; import org.redisson.config.Config; import org.redisson.config.MasterSlaveServersConfig; @@ -53,7 +55,6 @@ import org.redisson.connection.ClientConnectionsEntry.FreezeReason; import 
org.redisson.misc.InfinitySemaphoreLatch; import org.redisson.misc.RPromise; import org.redisson.misc.RedissonPromise; -import org.redisson.misc.RedissonThreadFactory; import org.redisson.pubsub.AsyncSemaphore; import org.redisson.pubsub.TransferListener; import org.slf4j.Logger; @@ -148,6 +149,12 @@ public class MasterSlaveConnectionManager implements ConnectionManager { private final AsyncSemaphore freePubSubLock = new AsyncSemaphore(1); + private final boolean sharedEventLoopGroup; + + private final boolean sharedExecutor; + + private final CommandSyncService commandExecutor; + { for (int i = 0; i < locks.length; i++) { locks[i] = new AsyncSemaphore(1); @@ -156,6 +163,7 @@ public class MasterSlaveConnectionManager implements ConnectionManager { public MasterSlaveConnectionManager(MasterSlaveServersConfig cfg, Config config) { this(config); + initTimer(cfg); init(cfg); } @@ -164,7 +172,7 @@ public class MasterSlaveConnectionManager implements ConnectionManager { if (cfg.isUseLinuxNativeEpoll()) { if (cfg.getEventLoopGroup() == null) { - this.group = new EpollEventLoopGroup(cfg.getNettyThreads()); + this.group = new EpollEventLoopGroup(cfg.getNettyThreads(), new DefaultThreadFactory("redisson-netty")); } else { this.group = cfg.getEventLoopGroup(); } @@ -198,11 +206,18 @@ public class MasterSlaveConnectionManager implements ConnectionManager { this.codec = cfg.getCodec(); this.shutdownPromise = newPromise(); + this.sharedEventLoopGroup = cfg.getEventLoopGroup() != null; + this.sharedExecutor = cfg.getExecutor() != null; + this.commandExecutor = new CommandSyncService(this); } public boolean isClusterMode() { return false; } + + public CommandSyncService getCommandExecutor() { + return commandExecutor; + } public IdleConnectionWatcher getConnectionWatcher() { return connectionWatcher; @@ -225,6 +240,17 @@ public class MasterSlaveConnectionManager implements ConnectionManager { protected void init(MasterSlaveServersConfig config) { this.config = config; + connectionWatcher = new IdleConnectionWatcher(this, config); + + try { + initEntry(config); + } catch (RuntimeException e) { + stopThreads(); + throw e; + } + } + + protected void initTimer(MasterSlaveServersConfig config) { int[] timeouts = new int[]{config.getRetryInterval(), config.getTimeout(), config.getReconnectionTimeout()}; Arrays.sort(timeouts); int minTimeout = timeouts[0]; @@ -235,16 +261,18 @@ public class MasterSlaveConnectionManager implements ConnectionManager { } else { minTimeout = 100; } - timer = new HashedWheelTimer(minTimeout, TimeUnit.MILLISECONDS); - - connectionWatcher = new IdleConnectionWatcher(this, config); - + + timer = new HashedWheelTimer(Executors.defaultThreadFactory(), minTimeout, TimeUnit.MILLISECONDS, 1024); + + // to avoid assertion error during timer.stop invocation try { - initEntry(config); - } catch (RuntimeException e) { - stopThreads(); - throw e; + Field leakField = HashedWheelTimer.class.getDeclaredField("leak"); + leakField.setAccessible(true); + leakField.set(timer, null); + } catch (Exception e) { + throw new IllegalStateException(e); } + } public ConnectionInitializer getConnectListener() { @@ -272,7 +300,7 @@ public class MasterSlaveConnectionManager implements ConnectionManager { protected MasterSlaveEntry createMasterSlaveEntry(MasterSlaveServersConfig config, HashSet slots) { MasterSlaveEntry entry = new MasterSlaveEntry(slots, this, config); - List> fs = entry.initSlaveBalancer(java.util.Collections.emptySet()); + List> fs = entry.initSlaveBalancer(java.util.Collections.emptySet()); for 
(RFuture future : fs) { future.syncUninterruptibly(); } @@ -310,12 +338,12 @@ public class MasterSlaveConnectionManager implements ConnectionManager { @Override public RedisClient createClient(NodeType type, String host, int port) { RedisClient client = createClient(host, port, config.getConnectTimeout(), config.getRetryInterval() * config.getRetryAttempts()); - clients.add(new RedisClientEntry(client, this, type)); + clients.add(new RedisClientEntry(client, commandExecutor, type)); return client; } public void shutdownAsync(RedisClient client) { - clients.remove(new RedisClientEntry(client, this, null)); + clients.remove(new RedisClientEntry(client, commandExecutor, null)); client.shutdownAsync(); } @@ -540,16 +568,32 @@ public class MasterSlaveConnectionManager implements ConnectionManager { } @Override - public Codec unsubscribe(String channelName) { + public RFuture unsubscribe(final String channelName, boolean temporaryDown) { final PubSubConnectionEntry entry = name2PubSubConnection.remove(channelName); if (entry == null) { return null; } + freePubSubConnections.remove(entry); - Codec entryCodec = entry.getConnection().getChannels().get(channelName); + final Codec entryCodec = entry.getConnection().getChannels().get(channelName); + if (temporaryDown) { + final RPromise result = newPromise(); + entry.unsubscribe(channelName, new BaseRedisPubSubListener() { + + @Override + public boolean onStatus(PubSubType type, String channel) { + if (type == PubSubType.UNSUBSCRIBE && channel.equals(channelName)) { + result.trySuccess(entryCodec); + return true; + } + return false; + } + + }); + return result; + } entry.unsubscribe(channelName, null); - - return entryCodec; + return newSucceededFuture(entryCodec); } public Codec punsubscribe(final String channelName, final AsyncSemaphore lock) { @@ -583,16 +627,32 @@ public class MasterSlaveConnectionManager implements ConnectionManager { @Override - public Codec punsubscribe(final String channelName) { + public RFuture punsubscribe(final String channelName, boolean temporaryDown) { final PubSubConnectionEntry entry = name2PubSubConnection.remove(channelName); if (entry == null) { return null; } + freePubSubConnections.remove(entry); - Codec entryCodec = entry.getConnection().getPatternChannels().get(channelName); + final Codec entryCodec = entry.getConnection().getChannels().get(channelName); + if (temporaryDown) { + final RPromise result = newPromise(); + entry.punsubscribe(channelName, new BaseRedisPubSubListener() { + + @Override + public boolean onStatus(PubSubType type, String channel) { + if (type == PubSubType.PUNSUBSCRIBE && channel.equals(channelName)) { + result.trySuccess(entryCodec); + return true; + } + return false; + } + + }); + return result; + } entry.punsubscribe(channelName, null); - - return entryCodec; + return newSucceededFuture(entryCodec); } @Override @@ -632,7 +692,7 @@ public class MasterSlaveConnectionManager implements ConnectionManager { if (entry == null) { entry = getEntry(source); } - return entry.connectionWriteOp(); + return entry.connectionWriteOp(command); } private MasterSlaveEntry getEntry(NodeSource source) { @@ -640,14 +700,14 @@ public class MasterSlaveConnectionManager implements ConnectionManager { if (source.getRedirect() != null) { MasterSlaveEntry e = getEntry(source.getAddr()); if (e == null) { - throw new RedisNodeNotFoundException("No node for slot: " + source.getAddr()); + throw new RedisNodeNotFoundException("Node: " + source.getAddr() + " for slot: " + source.getSlot() + " hasn't been discovered 
yet"); } return e; } MasterSlaveEntry e = getEntry(source.getSlot()); if (e == null) { - throw new RedisNodeNotFoundException("No node with slot: " + source.getSlot()); + throw new RedisNodeNotFoundException("Node: " + source.getAddr() + " for slot: " + source.getSlot() + " hasn't been discovered yet"); } return e; } @@ -659,9 +719,18 @@ public class MasterSlaveConnectionManager implements ConnectionManager { entry = getEntry(source.getSlot()); } if (source.getAddr() != null) { - return entry.connectionReadOp(source.getAddr()); + entry = getEntry(source.getAddr()); + if (entry == null) { + for (MasterSlaveEntry e : getEntrySet()) { + if (e.hasSlave(source.getAddr())) { + entry = e; + break; + } + } + } + return entry.connectionReadOp(command, source.getAddr()); } - return entry.connectionReadOp(); + return entry.connectionReadOp(command); } RFuture nextPubSubConnection(int slot) { @@ -704,15 +773,20 @@ public class MasterSlaveConnectionManager implements ConnectionManager { for (MasterSlaveEntry entry : entries.values()) { entry.shutdown(); } - timer.stop(); - executor.shutdown(); - try { - executor.awaitTermination(timeout, unit); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); + if (!sharedExecutor) { + executor.shutdown(); + try { + executor.awaitTermination(timeout, unit); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + } } - group.shutdownGracefully(quietPeriod, timeout, unit).syncUninterruptibly(); + + if (!sharedEventLoopGroup) { + group.shutdownGracefully(quietPeriod, timeout, unit).syncUninterruptibly(); + } + timer.stop(); } @Override @@ -790,7 +864,7 @@ public class MasterSlaveConnectionManager implements ConnectionManager { return executor; } - public URI getLastClusterNode() { + public URL getLastClusterNode() { return null; } } diff --git a/redisson/src/main/java/org/redisson/connection/MasterSlaveEntry.java b/redisson/src/main/java/org/redisson/connection/MasterSlaveEntry.java index bad7ef82a..e8d940300 100644 --- a/redisson/src/main/java/org/redisson/connection/MasterSlaveEntry.java +++ b/redisson/src/main/java/org/redisson/connection/MasterSlaveEntry.java @@ -16,7 +16,7 @@ package org.redisson.connection; import java.net.InetSocketAddress; -import java.net.URI; +import java.net.URL; import java.util.Collection; import java.util.HashSet; import java.util.LinkedList; @@ -32,12 +32,13 @@ import org.redisson.client.RedisPubSubConnection; import org.redisson.client.RedisPubSubListener; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.CommandData; +import org.redisson.client.protocol.RedisCommand; +import org.redisson.client.protocol.RedisCommands; import org.redisson.cluster.ClusterSlotRange; import org.redisson.config.MasterSlaveServersConfig; import org.redisson.config.ReadMode; import org.redisson.connection.ClientConnectionsEntry.FreezeReason; import org.redisson.connection.balancer.LoadBalancerManager; -import org.redisson.connection.balancer.LoadBalancerManagerImpl; import org.redisson.connection.pool.MasterConnectionPool; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -76,11 +77,11 @@ public class MasterSlaveEntry { this.connectionManager = connectionManager; this.config = config; - slaveBalancer = new LoadBalancerManagerImpl(config, connectionManager, this); + slaveBalancer = new LoadBalancerManager(config, connectionManager, this); writeConnectionHolder = new MasterConnectionPool(config, connectionManager, this); } - public List> initSlaveBalancer(Collection 
disconnectedNodes) { + public List> initSlaveBalancer(Collection disconnectedNodes) { boolean freezeMasterAsSlave = !config.getSlaveAddresses().isEmpty() && config.getReadMode() == ReadMode.SLAVE && disconnectedNodes.size() < config.getSlaveAddresses().size(); @@ -88,7 +89,7 @@ public class MasterSlaveEntry { List> result = new LinkedList>(); RFuture f = addSlave(config.getMasterAddress().getHost(), config.getMasterAddress().getPort(), freezeMasterAsSlave, NodeType.MASTER); result.add(f); - for (URI address : config.getSlaveAddresses()) { + for (URL address : config.getSlaveAddresses()) { f = addSlave(address.getHost(), address.getPort(), disconnectedNodes.contains(address), NodeType.SLAVE); result.add(f); } @@ -108,7 +109,7 @@ public class MasterSlaveEntry { return false; } - return slaveDown(e); + return slaveDown(e, freezeReason == FreezeReason.SYSTEM); } public boolean slaveDown(String host, int port, FreezeReason freezeReason) { @@ -117,10 +118,10 @@ public class MasterSlaveEntry { return false; } - return slaveDown(entry); + return slaveDown(entry, freezeReason == FreezeReason.SYSTEM); } - private boolean slaveDown(ClientConnectionsEntry entry) { + private boolean slaveDown(ClientConnectionsEntry entry, boolean temporaryDown) { // add master as slave if no more slaves available if (config.getReadMode() == ReadMode.SLAVE && slaveBalancer.getAvailableClients() == 0) { InetSocketAddress addr = masterEntry.getClient().getAddr(); @@ -154,33 +155,45 @@ public class MasterSlaveEntry { } for (RedisPubSubConnection connection : entry.getAllSubscribeConnections()) { - reattachPubSub(connection); + reattachPubSub(connection, temporaryDown); } entry.getAllSubscribeConnections().clear(); return true; } - private void reattachPubSub(RedisPubSubConnection redisPubSubConnection) { + private void reattachPubSub(RedisPubSubConnection redisPubSubConnection, boolean temporaryDown) { for (String channelName : redisPubSubConnection.getChannels().keySet()) { PubSubConnectionEntry pubSubEntry = connectionManager.getPubSubEntry(channelName); Collection> listeners = pubSubEntry.getListeners(channelName); - reattachPubSubListeners(channelName, listeners); + reattachPubSubListeners(channelName, listeners, temporaryDown); } for (String channelName : redisPubSubConnection.getPatternChannels().keySet()) { PubSubConnectionEntry pubSubEntry = connectionManager.getPubSubEntry(channelName); Collection> listeners = pubSubEntry.getListeners(channelName); - reattachPatternPubSubListeners(channelName, listeners); + reattachPatternPubSubListeners(channelName, listeners, temporaryDown); } } - private void reattachPubSubListeners(final String channelName, final Collection> listeners) { - Codec subscribeCodec = connectionManager.unsubscribe(channelName); + private void reattachPubSubListeners(final String channelName, final Collection> listeners, boolean temporaryDown) { + RFuture subscribeCodec = connectionManager.unsubscribe(channelName, temporaryDown); if (listeners.isEmpty()) { return; } + subscribeCodec.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + Codec subscribeCodec = future.get(); + subscribe(channelName, listeners, subscribeCodec); + } + + }); + } + + private void subscribe(final String channelName, final Collection> listeners, + final Codec subscribeCodec) { RFuture subscribeFuture = connectionManager.subscribe(subscribeCodec, channelName, null); subscribeFuture.addListener(new FutureListener() { @@ -188,42 +201,54 @@ public class MasterSlaveEntry 
{ public void operationComplete(Future future) throws Exception { if (!future.isSuccess()) { - log.error("Can't resubscribe topic channel: " + channelName); + subscribe(channelName, listeners, subscribeCodec); return; } PubSubConnectionEntry newEntry = future.getNow(); for (RedisPubSubListener redisPubSubListener : listeners) { newEntry.addListener(channelName, redisPubSubListener); } - log.debug("resubscribed listeners for '{}' channel", channelName); + log.debug("resubscribed listeners of '{}' channel to {}", channelName, newEntry.getConnection().getRedisClient()); } }); } - private void reattachPatternPubSubListeners(final String channelName, - final Collection> listeners) { - Codec subscribeCodec = connectionManager.punsubscribe(channelName); - if (!listeners.isEmpty()) { - RFuture future = connectionManager.psubscribe(channelName, subscribeCodec, null); - future.addListener(new FutureListener() { - @Override - public void operationComplete(Future future) - throws Exception { - if (!future.isSuccess()) { - log.error("Can't resubscribe topic channel: " + channelName); - return; - } - - PubSubConnectionEntry newEntry = future.getNow(); - for (RedisPubSubListener redisPubSubListener : listeners) { - newEntry.addListener(channelName, redisPubSubListener); - } - log.debug("resubscribed listeners for '{}' channel-pattern", channelName); - } - }); + private void reattachPatternPubSubListeners(final String channelName, final Collection> listeners, boolean temporaryDown) { + RFuture subscribeCodec = connectionManager.punsubscribe(channelName, temporaryDown); + if (listeners.isEmpty()) { + return; } + + subscribeCodec.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + Codec subscribeCodec = future.get(); + psubscribe(channelName, listeners, subscribeCodec); + } + }); } + private void psubscribe(final String channelName, final Collection> listeners, + final Codec subscribeCodec) { + RFuture subscribeFuture = connectionManager.psubscribe(channelName, subscribeCodec, null); + subscribeFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) + throws Exception { + if (!future.isSuccess()) { + psubscribe(channelName, listeners, subscribeCodec); + return; + } + + PubSubConnectionEntry newEntry = future.getNow(); + for (RedisPubSubListener redisPubSubListener : listeners) { + newEntry.addListener(channelName, redisPubSubListener); + } + log.debug("resubscribed listeners for '{}' channel-pattern", channelName); + } + }); + } + private void reattachBlockingQueue(RedisConnection connection) { final CommandData commandData = connection.getCurrentCommand(); @@ -232,7 +257,7 @@ public class MasterSlaveEntry { return; } - RFuture newConnection = connectionReadOp(); + RFuture newConnection = connectionReadOp(RedisCommands.BLPOP_VALUE); newConnection.addListener(new FutureListener() { @Override public void operationComplete(Future future) throws Exception { @@ -268,11 +293,15 @@ public class MasterSlaveEntry { } }); } + + public boolean hasSlave(InetSocketAddress addr) { + return slaveBalancer.contains(addr); + } public RFuture addSlave(String host, int port) { return addSlave(host, port, true, NodeType.SLAVE); } - + private RFuture addSlave(String host, int port, boolean freezed, NodeType mode) { RedisClient client = connectionManager.createClient(NodeType.SLAVE, host, port); ClientConnectionsEntry entry = new ClientConnectionsEntry(client, @@ -316,18 +345,23 @@ public class MasterSlaveEntry { * @param 
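
The `MasterSlaveEntry` hunks above change how pub/sub listeners are re-attached after a slave connection is lost: `unsubscribe`/`punsubscribe` now complete asynchronously with the codec of the old subscription, and a failed resubscribe no longer just logs an error but calls `subscribe`/`psubscribe` again until it succeeds. Below is a minimal sketch of that retry-on-failure pattern; `PubSubEntry` and `trySubscribe(...)` are placeholders standing in for Redisson's `PubSubConnectionEntry` and `ConnectionManager.subscribe(...)`, not the actual internals.

```java
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;

/**
 * Sketch of the retry-on-failure resubscribe pattern used above.
 * PubSubEntry and trySubscribe(...) are placeholders for Redisson's
 * PubSubConnectionEntry and ConnectionManager.subscribe(...).
 */
abstract class ResubscribeSketch {

    void resubscribe(final String channelName) {
        Future<PubSubEntry> future = trySubscribe(channelName);
        future.addListener(new FutureListener<PubSubEntry>() {
            @Override
            public void operationComplete(Future<PubSubEntry> f) throws Exception {
                if (!f.isSuccess()) {
                    // instead of logging and dropping the listeners, try again
                    resubscribe(channelName);
                    return;
                }
                // success: re-register the previously attached listeners on f.getNow()
            }
        });
    }

    // placeholder for the actual async subscribe call
    abstract Future<PubSubEntry> trySubscribe(String channelName);

    static final class PubSubEntry { }
}
```
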
host of Redis * @param port of Redis */ - public void changeMaster(String host, int port) { - ClientConnectionsEntry oldMaster = masterEntry; - setupMasterEntry(host, port); - writeConnectionHolder.remove(oldMaster); - slaveDown(oldMaster, FreezeReason.MANAGER); - - // more than one slave available, so master can be removed from slaves - if (config.getReadMode() == ReadMode.SLAVE - && slaveBalancer.getAvailableClients() > 1) { - slaveDown(host, port, FreezeReason.SYSTEM); - } - connectionManager.shutdownAsync(oldMaster.getClient()); + public void changeMaster(final String host, final int port) { + final ClientConnectionsEntry oldMaster = masterEntry; + RFuture future = setupMasterEntry(host, port); + future.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + writeConnectionHolder.remove(oldMaster); + slaveDown(oldMaster, FreezeReason.MANAGER); + + // more than one slave available, so master can be removed from slaves + if (config.getReadMode() == ReadMode.SLAVE + && slaveBalancer.getAvailableClients() > 1) { + slaveDown(host, port, FreezeReason.SYSTEM); + } + connectionManager.shutdownAsync(oldMaster.getClient()); + } + }); } public boolean isFreezed() { @@ -359,16 +393,16 @@ public class MasterSlaveEntry { slaveBalancer.shutdownAsync(); } - public RFuture connectionWriteOp() { - return writeConnectionHolder.get(); + public RFuture connectionWriteOp(RedisCommand command) { + return writeConnectionHolder.get(command); } - public RFuture connectionReadOp() { - return slaveBalancer.nextConnection(); + public RFuture connectionReadOp(RedisCommand command) { + return slaveBalancer.nextConnection(command); } - public RFuture connectionReadOp(InetSocketAddress addr) { - return slaveBalancer.getConnection(addr); + public RFuture connectionReadOp(RedisCommand command, InetSocketAddress addr) { + return slaveBalancer.getConnection(command, addr); } RFuture nextPubSubConnection() { diff --git a/redisson/src/main/java/org/redisson/connection/PubSubConnectionEntry.java b/redisson/src/main/java/org/redisson/connection/PubSubConnectionEntry.java index 2a6a5ce7e..f953f3ac4 100644 --- a/redisson/src/main/java/org/redisson/connection/PubSubConnectionEntry.java +++ b/redisson/src/main/java/org/redisson/connection/PubSubConnectionEntry.java @@ -17,12 +17,16 @@ package org.redisson.connection; import java.util.Collection; import java.util.Collections; +import java.util.EventListener; import java.util.Queue; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentLinkedQueue; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.atomic.AtomicInteger; +import org.redisson.PubSubMessageListener; +import org.redisson.PubSubPatternMessageListener; +import org.redisson.api.listener.MessageListener; import org.redisson.client.BaseRedisPubSubListener; import org.redisson.client.RedisPubSubConnection; import org.redisson.client.RedisPubSubListener; @@ -34,8 +38,6 @@ import io.netty.util.concurrent.Future; public class PubSubConnectionEntry { - public enum Status {ACTIVE, INACTIVE} - private final AtomicInteger subscribedChannelsAmount; private final RedisPubSubConnection conn; @@ -90,7 +92,34 @@ public class PubSubConnectionEntry { conn.addListener(listener); } + public boolean removeAllListeners(String channelName) { + Queue> listeners = channelListeners.get(channelName); + for (RedisPubSubListener listener : listeners) { + removeListener(channelName, listener); + } + return !listeners.isEmpty(); + } + // TODO 
optimize + public boolean removeListener(String channelName, EventListener msgListener) { + Queue> listeners = channelListeners.get(channelName); + for (RedisPubSubListener listener : listeners) { + if (listener instanceof PubSubMessageListener) { + if (((PubSubMessageListener)listener).getListener() == msgListener) { + removeListener(channelName, listener); + return true; + } + } + if (listener instanceof PubSubPatternMessageListener) { + if (((PubSubPatternMessageListener)listener).getListener() == msgListener) { + removeListener(channelName, listener); + return true; + } + } + } + return false; + } + public boolean removeListener(String channelName, int listenerId) { Queue> listeners = channelListeners.get(channelName); for (RedisPubSubListener listener : listeners) { diff --git a/redisson/src/main/java/org/redisson/connection/RedisClientEntry.java b/redisson/src/main/java/org/redisson/connection/RedisClientEntry.java index 674c80d89..5d92c82ae 100644 --- a/redisson/src/main/java/org/redisson/connection/RedisClientEntry.java +++ b/redisson/src/main/java/org/redisson/connection/RedisClientEntry.java @@ -16,28 +16,31 @@ package org.redisson.connection; import java.net.InetSocketAddress; -import java.util.List; import java.util.Map; import org.redisson.api.ClusterNode; import org.redisson.api.NodeType; +import org.redisson.api.RFuture; import org.redisson.client.RedisClient; -import org.redisson.client.RedisConnection; -import org.redisson.client.RedisException; -import org.redisson.client.codec.LongCodec; +import org.redisson.client.codec.StringCodec; import org.redisson.client.protocol.RedisCommands; -import org.redisson.misc.RPromise; +import org.redisson.command.CommandSyncService; +/** + * + * @author Nikita Koksharov + * + */ public class RedisClientEntry implements ClusterNode { private final RedisClient client; - private final ConnectionManager manager; + private final CommandSyncService commandExecutor; private final NodeType type; - public RedisClientEntry(RedisClient client, ConnectionManager manager, NodeType type) { + public RedisClientEntry(RedisClient client, CommandSyncService commandExecutor, NodeType type) { super(); this.client = client; - this.manager = manager; + this.commandExecutor = commandExecutor; this.type = type; } @@ -55,27 +58,13 @@ public class RedisClientEntry implements ClusterNode { return client.getAddr(); } - private RedisConnection connect() { - RedisConnection c = client.connect(); - RPromise future = manager.newPromise(); - manager.getConnectListener().onConnect(future, c, null, manager.getConfig()); - future.syncUninterruptibly(); - return future.getNow(); + public RFuture pingAsync() { + return commandExecutor.readAsync(client.getAddr(), (String)null, null, RedisCommands.PING_BOOL); } - + @Override public boolean ping() { - RedisConnection c = null; - try { - c = connect(); - return "PONG".equals(c.sync(RedisCommands.PING)); - } catch (Exception e) { - return false; - } finally { - if (c != null) { - c.closeAsync(); - } - } + return commandExecutor.get(pingAsync()); } @Override @@ -103,34 +92,64 @@ public class RedisClientEntry implements ClusterNode { return true; } + @Override + public RFuture timeAsync() { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.TIME); + } + + @Override public long time() { - RedisConnection c = null; - try { - c = connect(); - List parts = c.sync(RedisCommands.TIME); - return Long.valueOf(parts.get(0)); - } catch (Exception e) { - throw new 
RedisException(e.getMessage(), e); - } finally { - if (c != null) { - c.closeAsync(); - } + return commandExecutor.get(timeAsync()); + } + + @Override + public RFuture> clusterInfoAsync() { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.CLUSTER_INFO); + } + + @Override + public Map clusterInfo() { + return commandExecutor.get(clusterInfoAsync()); + } + + @Override + public Map info(InfoSection section) { + return commandExecutor.get(infoAsync(section)); + } + + @Override + public RFuture> infoAsync(InfoSection section) { + if (section == InfoSection.ALL) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_ALL); + } else if (section == InfoSection.DEFAULT) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_DEFAULT); + } else if (section == InfoSection.SERVER) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_SERVER); + } else if (section == InfoSection.CLIENTS) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_CLIENTS); + } else if (section == InfoSection.MEMORY) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_MEMORY); + } else if (section == InfoSection.PERSISTENCE) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_PERSISTENCE); + } else if (section == InfoSection.STATS) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_STATS); + } else if (section == InfoSection.REPLICATION) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_REPLICATION); + } else if (section == InfoSection.CPU) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_CPU); + } else if (section == InfoSection.COMMANDSTATS) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_COMMANDSTATS); + } else if (section == InfoSection.CLUSTER) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_CLUSTER); + } else if (section == InfoSection.KEYSPACE) { + return commandExecutor.readAsync(client.getAddr(), (String)null, StringCodec.INSTANCE, RedisCommands.INFO_KEYSPACE); } + throw new IllegalStateException(); } @Override public Map info() { - RedisConnection c = null; - try { - c = connect(); - return c.sync(RedisCommands.CLUSTER_INFO); - } catch (Exception e) { - return null; - } finally { - if (c != null) { - c.closeAsync(); - } - } + return clusterInfo(); } } diff --git a/redisson/src/main/java/org/redisson/connection/ReplicatedConnectionManager.java b/redisson/src/main/java/org/redisson/connection/ReplicatedConnectionManager.java new file mode 100644 index 000000000..ca776ad4d --- /dev/null +++ b/redisson/src/main/java/org/redisson/connection/ReplicatedConnectionManager.java @@ -0,0 +1,233 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
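
With `RedisClientEntry` now delegating to `CommandSyncService`, the blocking `ping()`, `time()` and `info()` calls above become thin wrappers over new async variants. A hedged usage sketch follows, assuming the async methods and the `InfoSection` enum are exposed through the `Node` interface returned by `redisson.getNodesGroup()`:

```java
import java.util.Map;

import org.redisson.Redisson;
import org.redisson.api.Node;
import org.redisson.api.NodesGroup;
import org.redisson.api.RFuture;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;

public class NodePingExample {
    public static void main(String[] args) throws Exception {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // redis:// address format
        RedissonClient redisson = Redisson.create(config);

        NodesGroup<Node> nodesGroup = redisson.getNodesGroup();
        for (final Node node : nodesGroup.getNodes()) {
            // blocking ping() is now just a wait on pingAsync()
            System.out.println(node.getAddr() + " reachable: " + node.ping());

            // non-blocking variant
            RFuture<Boolean> pingFuture = node.pingAsync();
            pingFuture.addListener(new FutureListener<Boolean>() {
                @Override
                public void operationComplete(Future<Boolean> f) throws Exception {
                    System.out.println(node.getAddr() + " async ping: " + f.getNow());
                }
            });

            // sections map onto the new infoAsync(InfoSection) dispatch
            Map<String, String> memory = node.info(Node.InfoSection.MEMORY);
            System.out.println(memory.get("used_memory_human"));
        }
        redisson.shutdown();
    }
}
```
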
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.connection; + +import java.net.URL; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; + +import org.redisson.api.RFuture; +import org.redisson.client.RedisClient; +import org.redisson.client.RedisConnection; +import org.redisson.client.RedisConnectionException; +import org.redisson.client.RedisException; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.config.BaseMasterSlaveServersConfig; +import org.redisson.config.Config; +import org.redisson.config.MasterSlaveServersConfig; +import org.redisson.config.ReplicatedServersConfig; +import org.redisson.misc.RPromise; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.FutureListener; +import io.netty.util.concurrent.GlobalEventExecutor; +import io.netty.util.concurrent.ScheduledFuture; + +/** + * {@link ConnectionManager} for AWS ElastiCache Replication Groups or Azure Redis Cache. By providing all nodes + * of the replication group to this manager, the role of each node can be polled to determine + * if a failover has occurred resulting in a new master. + * + * @author Nikita Koksharov + * @author Steve Ungerer + */ +public class ReplicatedConnectionManager extends MasterSlaveConnectionManager { + + private static final String ROLE_KEY = "role"; + + private final Logger log = LoggerFactory.getLogger(getClass()); + + private AtomicReference currentMaster = new AtomicReference(); + + private final Map nodeConnections = new HashMap(); + + private ScheduledFuture monitorFuture; + + private enum Role { + master, + slave + } + + public ReplicatedConnectionManager(ReplicatedServersConfig cfg, Config config) { + super(config); + + this.config = create(cfg); + initTimer(this.config); + + for (URL addr : cfg.getNodeAddresses()) { + RFuture connectionFuture = connect(cfg, addr); + connectionFuture.awaitUninterruptibly(); + RedisConnection connection = connectionFuture.getNow(); + if (connection == null) { + continue; + } + + Role role = Role.valueOf(connection.sync(RedisCommands.INFO_REPLICATION).get(ROLE_KEY)); + if (Role.master.equals(role)) { + if (currentMaster.get() != null) { + throw new RedisException("Multiple masters detected"); + } + currentMaster.set(addr); + log.info("{} is the master", addr); + this.config.setMasterAddress(addr); + } else { + log.info("{} is a slave", addr); + this.config.addSlaveAddress(addr); + } + } + + if (currentMaster.get() == null) { + throw new RedisConnectionException("Can't connect to servers!"); + } + + init(this.config); + + scheduleMasterChangeCheck(cfg); + } + + @Override + protected MasterSlaveServersConfig create(BaseMasterSlaveServersConfig cfg) { + MasterSlaveServersConfig res = super.create(cfg); + res.setDatabase(((ReplicatedServersConfig)cfg).getDatabase()); + return res; + } + + private RFuture connect(BaseMasterSlaveServersConfig cfg, final URL addr) { + RedisConnection connection = nodeConnections.get(addr); + 
if (connection != null) { + return newSucceededFuture(connection); + } + + RedisClient client = createClient(addr.getHost(), addr.getPort(), cfg.getConnectTimeout(), cfg.getRetryInterval() * cfg.getRetryAttempts()); + final RPromise result = newPromise(); + RFuture future = client.connectAsync(); + future.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + result.tryFailure(future.cause()); + return; + } + + RedisConnection connection = future.getNow(); + RPromise promise = newPromise(); + connectListener.onConnect(promise, connection, null, config); + promise.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + result.tryFailure(future.cause()); + return; + } + + RedisConnection connection = future.getNow(); + if (connection.isActive()) { + nodeConnections.put(addr, connection); + result.trySuccess(connection); + } else { + connection.closeAsync(); + result.tryFailure(new RedisException("Connection to " + connection.getRedisClient().getAddr() + " is not active!")); + } + } + }); + } + }); + + return result; + } + + private void scheduleMasterChangeCheck(final ReplicatedServersConfig cfg) { + monitorFuture = GlobalEventExecutor.INSTANCE.schedule(new Runnable() { + @Override + public void run() { + final URL master = currentMaster.get(); + log.debug("Current master: {}", master); + + final AtomicInteger count = new AtomicInteger(cfg.getNodeAddresses().size()); + for (final URL addr : cfg.getNodeAddresses()) { + if (isShuttingDown()) { + return; + } + + RFuture connectionFuture = connect(cfg, addr); + connectionFuture.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + log.error(future.cause().getMessage(), future.cause()); + if (count.decrementAndGet() == 0) { + scheduleMasterChangeCheck(cfg); + } + return; + } + + if (isShuttingDown()) { + return; + } + + RedisConnection connection = future.getNow(); + RFuture> result = connection.async(RedisCommands.INFO_REPLICATION); + result.addListener(new FutureListener>() { + @Override + public void operationComplete(Future> future) + throws Exception { + if (!future.isSuccess()) { + log.error(future.cause().getMessage(), future.cause()); + if (count.decrementAndGet() == 0) { + scheduleMasterChangeCheck(cfg); + } + return; + } + + Role role = Role.valueOf(future.getNow().get(ROLE_KEY)); + if (Role.master.equals(role)) { + if (master.equals(addr)) { + log.debug("Current master {} unchanged", master); + } else if (currentMaster.compareAndSet(master, addr)) { + log.info("Master has changed from {} to {}", master, addr); + changeMaster(singleSlotRange.getStartSlot(), addr.getHost(), addr.getPort()); + } + } + + if (count.decrementAndGet() == 0) { + scheduleMasterChangeCheck(cfg); + } + } + }); + } + }); + } + } + + }, cfg.getScanInterval(), TimeUnit.MILLISECONDS); + } + + @Override + public void shutdown() { + monitorFuture.cancel(true); + super.shutdown(); + + for (RedisConnection connection : nodeConnections.values()) { + connection.getRedisClient().shutdown(); + } + } +} + diff --git a/redisson/src/main/java/org/redisson/connection/SentinelConnectionManager.java b/redisson/src/main/java/org/redisson/connection/SentinelConnectionManager.java index 227ab8a82..6974246d2 100755 --- a/redisson/src/main/java/org/redisson/connection/SentinelConnectionManager.java +++ 
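
`ReplicatedConnectionManager` above discovers the master by asking every configured node for its `INFO replication` role and re-checks it every `scanInterval` milliseconds, switching the master entry when a different node reports itself as `master`. A configuration sketch for this mode, assuming the usual `Config.useReplicatedServers()` entry point and fluent setters matching the `ReplicatedServersConfig` getters referenced above:

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class ReplicatedModeExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useReplicatedServers()
              // all nodes of the ElastiCache / Azure replication group
              .addNodeAddress("redis://replica-group-node1.example.com:6379",
                              "redis://replica-group-node2.example.com:6379")
              // how often each node's role is polled for a master change (ms)
              .setScanInterval(2000);

        RedissonClient redisson = Redisson.create(config);
        // ... use redisson as usual; a failover is picked up by the role scan
        redisson.shutdown();
    }
}
```
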
b/redisson/src/main/java/org/redisson/connection/SentinelConnectionManager.java @@ -16,7 +16,7 @@ package org.redisson.connection; import java.net.InetSocketAddress; -import java.net.URI; +import java.net.URL; import java.util.ArrayList; import java.util.HashSet; import java.util.List; @@ -41,7 +41,7 @@ import org.redisson.config.MasterSlaveServersConfig; import org.redisson.config.ReadMode; import org.redisson.config.SentinelServersConfig; import org.redisson.connection.ClientConnectionsEntry.FreezeReason; -import org.redisson.misc.URIBuilder; +import org.redisson.misc.URLBuilder; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -49,6 +49,11 @@ import io.netty.util.concurrent.Future; import io.netty.util.concurrent.FutureListener; import io.netty.util.internal.PlatformDependent; +/** + * + * @author Nikita Koksharov + * + */ public class SentinelConnectionManager extends MasterSlaveConnectionManager { private final Logger log = LoggerFactory.getLogger(getClass()); @@ -57,14 +62,15 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { private final AtomicReference currentMaster = new AtomicReference(); private final ConcurrentMap slaves = PlatformDependent.newConcurrentHashMap(); - private final Set disconnectedSlaves = new HashSet(); + private final Set disconnectedSlaves = new HashSet(); public SentinelConnectionManager(SentinelServersConfig cfg, Config config) { super(config); final MasterSlaveServersConfig c = create(cfg); + initTimer(c); - for (URI addr : cfg.getSentinelAddresses()) { + for (URL addr : cfg.getSentinelAddresses()) { RedisClient client = createClient(addr.getHost(), addr.getPort(), c.getConnectTimeout(), c.getRetryInterval() * c.getRetryAttempts()); try { RedisConnection connection = client.connect(); @@ -98,7 +104,7 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { log.info("slave: {} added", host); if (flags.contains("s_down") || flags.contains("disconnected")) { - URI url = URIBuilder.create(host); + URL url = URLBuilder.create(host); disconnectedSlaves.add(url); log.warn("slave: {} is down", host); } @@ -117,7 +123,7 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { init(c); List> connectionFutures = new ArrayList>(cfg.getSentinelAddresses().size()); - for (URI addr : cfg.getSentinelAddresses()) { + for (URL addr : cfg.getSentinelAddresses()) { RFuture future = registerSentinel(cfg, addr, c); connectionFutures.add(future); } @@ -140,7 +146,7 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { return entry; } - private RFuture registerSentinel(final SentinelServersConfig cfg, final URI addr, final MasterSlaveServersConfig c) { + private RFuture registerSentinel(final SentinelServersConfig cfg, final URL addr, final MasterSlaveServersConfig c) { RedisClient client = createClient(addr.getHost(), addr.getPort(), c.getConnectTimeout(), c.getRetryInterval() * c.getRetryAttempts()); RedisClient oldClient = sentinels.putIfAbsent(addr.getHost() + ":" + addr.getPort(), client); if (oldClient != null) { @@ -202,12 +208,12 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { String port = parts[3]; String addr = ip + ":" + port; - URI uri = URIBuilder.create(addr); + URL uri = URLBuilder.create(addr); registerSentinel(cfg, uri, c); } } - protected void onSlaveAdded(URI addr, String msg) { + protected void onSlaveAdded(URL addr, String msg) { String[] parts = msg.split(" "); if (parts.length > 4 @@ -217,9 +223,14 @@ public 
class SentinelConnectionManager extends MasterSlaveConnectionManager { final String slaveAddr = ip + ":" + port; + if (!isUseSameMaster(parts)) { + return; + } + // to avoid addition twice - if (slaves.putIfAbsent(slaveAddr, true) == null && config.getReadMode() != ReadMode.MASTER) { - RFuture future = getEntry(singleSlotRange.getStartSlot()).addSlave(ip, Integer.valueOf(port)); + if (slaves.putIfAbsent(slaveAddr, true) == null) { + final MasterSlaveEntry entry = getEntry(singleSlotRange.getStartSlot()); + RFuture future = entry.addSlave(ip, Integer.valueOf(port)); future.addListener(new FutureListener() { @Override public void operationComplete(Future future) throws Exception { @@ -229,7 +240,7 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { return; } - if (getEntry(singleSlotRange.getStartSlot()).slaveUp(ip, Integer.valueOf(port), FreezeReason.MANAGER)) { + if (entry.slaveUp(ip, Integer.valueOf(port), FreezeReason.MANAGER)) { String slaveAddr = ip + ":" + port; log.info("slave: {} added", slaveAddr); } @@ -243,7 +254,7 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { } } - private void onNodeDown(URI sentinelAddr, String msg) { + private void onNodeDown(URL sentinelAddr, String msg) { String[] parts = msg.split(" "); if (parts.length > 3) { @@ -266,12 +277,14 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { String ip = parts[2]; String port = parts[3]; - MasterSlaveEntry entry = getEntry(singleSlotRange.getStartSlot()); - if (entry.getFreezeReason() != FreezeReason.MANAGER) { - entry.freeze(); - String addr = ip + ":" + port; - log.warn("master: {} has down", addr); - } +// should be resolved by master switch event +// +// MasterSlaveEntry entry = getEntry(singleSlotRange.getStartSlot()); +// if (entry.getFreezeReason() != FreezeReason.MANAGER) { +// entry.freeze(); +// String addr = ip + ":" + port; +// log.warn("master: {} has down", addr); +// } } } else { log.warn("onSlaveDown. 
Invalid message: {} from Sentinel {}:{}", msg, sentinelAddr.getHost(), sentinelAddr.getPort()); @@ -289,7 +302,22 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { } } - private void onNodeUp(URI addr, String msg) { + private boolean isUseSameMaster(String[] parts) { + String ip = parts[2]; + String port = parts[3]; + + String slaveAddr = ip + ":" + port; + + String master = currentMaster.get(); + String slaveMaster = parts[6] + ":" + parts[7]; + if (!master.equals(slaveMaster)) { + log.warn("Skipped slave up {} for master {} differs from current {}", slaveAddr, slaveMaster, master); + return false; + } + return true; + } + + private void onNodeUp(URL addr, String msg) { String[] parts = msg.split(" "); if (parts.length > 3) { @@ -297,6 +325,10 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { String ip = parts[2]; String port = parts[3]; + if (!isUseSameMaster(parts)) { + return; + } + slaveUp(ip, port); } else if ("master".equals(parts[0])) { String ip = parts[2]; @@ -328,7 +360,7 @@ public class SentinelConnectionManager extends MasterSlaveConnectionManager { } } - private void onMasterChange(SentinelServersConfig cfg, URI addr, String msg) { + private void onMasterChange(SentinelServersConfig cfg, URL addr, String msg) { String[] parts = msg.split(" "); if (parts.length > 3) { diff --git a/redisson/src/main/java/org/redisson/connection/SingleConnectionManager.java b/redisson/src/main/java/org/redisson/connection/SingleConnectionManager.java index 00f80b2aa..8cb088bf3 100644 --- a/redisson/src/main/java/org/redisson/connection/SingleConnectionManager.java +++ b/redisson/src/main/java/org/redisson/connection/SingleConnectionManager.java @@ -31,6 +31,11 @@ import org.slf4j.LoggerFactory; import io.netty.util.concurrent.GlobalEventExecutor; import io.netty.util.concurrent.ScheduledFuture; +/** + * + * @author Nikita Koksharov + * + */ public class SingleConnectionManager extends MasterSlaveConnectionManager { private final Logger log = LoggerFactory.getLogger(getClass()); diff --git a/redisson/src/main/java/org/redisson/connection/SingleEntry.java b/redisson/src/main/java/org/redisson/connection/SingleEntry.java index 6ccf53ae5..c5b0c7a5a 100644 --- a/redisson/src/main/java/org/redisson/connection/SingleEntry.java +++ b/redisson/src/main/java/org/redisson/connection/SingleEntry.java @@ -24,6 +24,7 @@ import org.redisson.api.RFuture; import org.redisson.client.RedisClient; import org.redisson.client.RedisConnection; import org.redisson.client.RedisPubSubConnection; +import org.redisson.client.protocol.RedisCommand; import org.redisson.cluster.ClusterSlotRange; import org.redisson.config.MasterSlaveServersConfig; import org.redisson.connection.pool.PubSubConnectionPool; @@ -82,13 +83,13 @@ public class SingleEntry extends MasterSlaveEntry { } @Override - public RFuture connectionReadOp(InetSocketAddress addr) { - return super.connectionWriteOp(); + public RFuture connectionReadOp(RedisCommand command, InetSocketAddress addr) { + return super.connectionWriteOp(command); } @Override - public RFuture connectionReadOp() { - return super.connectionWriteOp(); + public RFuture connectionReadOp(RedisCommand command) { + return super.connectionWriteOp(command); } @Override diff --git a/redisson/src/main/java/org/redisson/connection/balancer/LoadBalancerManager.java b/redisson/src/main/java/org/redisson/connection/balancer/LoadBalancerManager.java index e80ed8cad..40e46a5c8 100644 --- 
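
The Sentinel changes above add an `isUseSameMaster` guard: a slave announced by Sentinel is attached only when the master reported for it matches the master this manager currently tracks, so slaves belonging to a different replication group monitored by the same Sentinels are skipped. For reference, a minimal Sentinel-mode configuration using the `redis://` address form assumed throughout this patch:

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class SentinelModeExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName("mymaster")                      // master name registered in Sentinel
              .addSentinelAddress("redis://127.0.0.1:26379",  // sentinel nodes
                                  "redis://127.0.0.1:26380");

        RedissonClient redisson = Redisson.create(config);
        // slaves reported by Sentinel for other masters are now ignored
        redisson.shutdown();
    }
}
```
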
a/redisson/src/main/java/org/redisson/connection/balancer/LoadBalancerManager.java +++ b/redisson/src/main/java/org/redisson/connection/balancer/LoadBalancerManager.java @@ -16,37 +16,169 @@ package org.redisson.connection.balancer; import java.net.InetSocketAddress; +import java.util.Map; +import java.util.concurrent.atomic.AtomicInteger; import org.redisson.api.RFuture; import org.redisson.client.RedisConnection; +import org.redisson.client.RedisConnectionException; import org.redisson.client.RedisPubSubConnection; +import org.redisson.client.protocol.RedisCommand; +import org.redisson.config.MasterSlaveServersConfig; import org.redisson.connection.ClientConnectionsEntry; import org.redisson.connection.ClientConnectionsEntry.FreezeReason; +import org.redisson.connection.ConnectionManager; +import org.redisson.connection.MasterSlaveEntry; +import org.redisson.connection.pool.PubSubConnectionPool; +import org.redisson.connection.pool.SlaveConnectionPool; +import org.redisson.misc.RPromise; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; -public interface LoadBalancerManager { +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.FutureListener; +import io.netty.util.internal.PlatformDependent; - RFuture getConnection(InetSocketAddress addr); +public class LoadBalancerManager { - int getAvailableClients(); + private final Logger log = LoggerFactory.getLogger(getClass()); - void shutdownAsync(); + private final ConnectionManager connectionManager; + private final Map addr2Entry = PlatformDependent.newConcurrentHashMap(); + private final PubSubConnectionPool pubSubConnectionPool; + private final SlaveConnectionPool slaveConnectionPool; - void shutdown(); + public LoadBalancerManager(MasterSlaveServersConfig config, ConnectionManager connectionManager, MasterSlaveEntry entry) { + this.connectionManager = connectionManager; + slaveConnectionPool = new SlaveConnectionPool(config, connectionManager, entry); + pubSubConnectionPool = new PubSubConnectionPool(config, connectionManager, entry); + } - boolean unfreeze(String host, int port, FreezeReason freezeReason); + public RFuture add(final ClientConnectionsEntry entry) { + final RPromise result = connectionManager.newPromise(); + FutureListener listener = new FutureListener() { + AtomicInteger counter = new AtomicInteger(2); + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + result.tryFailure(future.cause()); + return; + } + if (counter.decrementAndGet() == 0) { + addr2Entry.put(entry.getClient().getAddr(), entry); + result.trySuccess(null); + } + } + }; - ClientConnectionsEntry freeze(ClientConnectionsEntry connectionEntry, FreezeReason freezeReason); + RFuture slaveFuture = slaveConnectionPool.add(entry); + slaveFuture.addListener(listener); + RFuture pubSubFuture = pubSubConnectionPool.add(entry); + pubSubFuture.addListener(listener); + return result; + } + + public int getAvailableClients() { + int count = 0; + for (ClientConnectionsEntry connectionEntry : addr2Entry.values()) { + if (!connectionEntry.isFreezed()) { + count++; + } + } + return count; + } + + public boolean unfreeze(String host, int port, FreezeReason freezeReason) { + InetSocketAddress addr = new InetSocketAddress(host, port); + ClientConnectionsEntry entry = addr2Entry.get(addr); + if (entry == null) { + throw new IllegalStateException("Can't find " + addr + " in slaves!"); + } + + synchronized (entry) { + if (!entry.isFreezed()) { + return false; + } + if ((freezeReason == 
FreezeReason.RECONNECT + && entry.getFreezeReason() == FreezeReason.RECONNECT) + || freezeReason != FreezeReason.RECONNECT) { + entry.resetFailedAttempts(); + entry.setFreezed(false); + entry.setFreezeReason(null); + return true; + } + } + return false; + } + + public ClientConnectionsEntry freeze(String host, int port, FreezeReason freezeReason) { + InetSocketAddress addr = new InetSocketAddress(host, port); + ClientConnectionsEntry connectionEntry = addr2Entry.get(addr); + return freeze(connectionEntry, freezeReason); + } + + public ClientConnectionsEntry freeze(ClientConnectionsEntry connectionEntry, FreezeReason freezeReason) { + if (connectionEntry == null) { + return null; + } + + synchronized (connectionEntry) { + // only RECONNECT freeze reason could be replaced + if (connectionEntry.getFreezeReason() == null + || connectionEntry.getFreezeReason() == FreezeReason.RECONNECT) { + connectionEntry.setFreezed(true); + connectionEntry.setFreezeReason(freezeReason); + return connectionEntry; + } + if (connectionEntry.isFreezed()) { + return null; + } + } + + return connectionEntry; + } + + public RFuture nextPubSubConnection() { + return pubSubConnectionPool.get(); + } + + public boolean contains(InetSocketAddress addr) { + return addr2Entry.containsKey(addr); + } - ClientConnectionsEntry freeze(String host, int port, FreezeReason freezeReason); + public RFuture getConnection(RedisCommand command, InetSocketAddress addr) { + ClientConnectionsEntry entry = addr2Entry.get(addr); + if (entry != null) { + return slaveConnectionPool.get(command, entry); + } + RedisConnectionException exception = new RedisConnectionException("Can't find entry for " + addr); + return connectionManager.newFailedFuture(exception); + } - RFuture add(ClientConnectionsEntry entry); + public RFuture nextConnection(RedisCommand command) { + return slaveConnectionPool.get(command); + } - RFuture nextConnection(); + public void returnPubSubConnection(RedisPubSubConnection connection) { + ClientConnectionsEntry entry = addr2Entry.get(connection.getRedisClient().getAddr()); + pubSubConnectionPool.returnConnection(entry, connection); + } - RFuture nextPubSubConnection(); + public void returnConnection(RedisConnection connection) { + ClientConnectionsEntry entry = addr2Entry.get(connection.getRedisClient().getAddr()); + slaveConnectionPool.returnConnection(entry, connection); + } - void returnConnection(RedisConnection connection); + public void shutdown() { + for (ClientConnectionsEntry entry : addr2Entry.values()) { + entry.getClient().shutdown(); + } + } - void returnPubSubConnection(RedisPubSubConnection connection); + public void shutdownAsync() { + for (ClientConnectionsEntry entry : addr2Entry.values()) { + connectionManager.shutdownAsync(entry.getClient()); + } + } } diff --git a/redisson/src/main/java/org/redisson/connection/balancer/LoadBalancerManagerImpl.java b/redisson/src/main/java/org/redisson/connection/balancer/LoadBalancerManagerImpl.java deleted file mode 100644 index 1754f42f7..000000000 --- a/redisson/src/main/java/org/redisson/connection/balancer/LoadBalancerManagerImpl.java +++ /dev/null @@ -1,179 +0,0 @@ -/** - * Copyright 2016 Nikita Koksharov - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.redisson.connection.balancer; - -import java.net.InetSocketAddress; -import java.util.Map; -import java.util.concurrent.atomic.AtomicInteger; - -import org.redisson.api.RFuture; -import org.redisson.client.RedisConnection; -import org.redisson.client.RedisConnectionException; -import org.redisson.client.RedisPubSubConnection; -import org.redisson.config.MasterSlaveServersConfig; -import org.redisson.connection.ClientConnectionsEntry; -import org.redisson.connection.ClientConnectionsEntry.FreezeReason; -import org.redisson.connection.ConnectionManager; -import org.redisson.connection.MasterSlaveEntry; -import org.redisson.connection.pool.PubSubConnectionPool; -import org.redisson.connection.pool.SlaveConnectionPool; -import org.redisson.misc.RPromise; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import io.netty.util.concurrent.Future; -import io.netty.util.concurrent.FutureListener; -import io.netty.util.internal.PlatformDependent; - -public class LoadBalancerManagerImpl implements LoadBalancerManager { - - private final Logger log = LoggerFactory.getLogger(getClass()); - - private final ConnectionManager connectionManager; - private final Map addr2Entry = PlatformDependent.newConcurrentHashMap(); - private final PubSubConnectionPool pubSubConnectionPool; - private final SlaveConnectionPool slaveConnectionPool; - - public LoadBalancerManagerImpl(MasterSlaveServersConfig config, ConnectionManager connectionManager, MasterSlaveEntry entry) { - this.connectionManager = connectionManager; - slaveConnectionPool = new SlaveConnectionPool(config, connectionManager, entry); - pubSubConnectionPool = new PubSubConnectionPool(config, connectionManager, entry); - } - - public RFuture add(final ClientConnectionsEntry entry) { - final RPromise result = connectionManager.newPromise(); - FutureListener listener = new FutureListener() { - AtomicInteger counter = new AtomicInteger(2); - @Override - public void operationComplete(Future future) throws Exception { - if (!future.isSuccess()) { - result.tryFailure(future.cause()); - return; - } - if (counter.decrementAndGet() == 0) { - addr2Entry.put(entry.getClient().getAddr(), entry); - result.trySuccess(null); - } - } - }; - - RFuture slaveFuture = slaveConnectionPool.add(entry); - slaveFuture.addListener(listener); - RFuture pubSubFuture = pubSubConnectionPool.add(entry); - pubSubFuture.addListener(listener); - return result; - } - - public int getAvailableClients() { - int count = 0; - for (ClientConnectionsEntry connectionEntry : addr2Entry.values()) { - if (!connectionEntry.isFreezed()) { - count++; - } - } - return count; - } - - public boolean unfreeze(String host, int port, FreezeReason freezeReason) { - InetSocketAddress addr = new InetSocketAddress(host, port); - ClientConnectionsEntry entry = addr2Entry.get(addr); - if (entry == null) { - throw new IllegalStateException("Can't find " + addr + " in slaves!"); - } - - synchronized (entry) { - if (!entry.isFreezed()) { - return false; - } - if ((freezeReason == FreezeReason.RECONNECT - && entry.getFreezeReason() == FreezeReason.RECONNECT) - || 
freezeReason != FreezeReason.RECONNECT) { - entry.resetFailedAttempts(); - entry.setFreezed(false); - entry.setFreezeReason(null); - return true; - } - } - return false; - } - - public ClientConnectionsEntry freeze(String host, int port, FreezeReason freezeReason) { - InetSocketAddress addr = new InetSocketAddress(host, port); - ClientConnectionsEntry connectionEntry = addr2Entry.get(addr); - return freeze(connectionEntry, freezeReason); - } - - public ClientConnectionsEntry freeze(ClientConnectionsEntry connectionEntry, FreezeReason freezeReason) { - if (connectionEntry == null) { - return null; - } - - synchronized (connectionEntry) { - // only RECONNECT freeze reason could be replaced - if (connectionEntry.getFreezeReason() == null - || connectionEntry.getFreezeReason() == FreezeReason.RECONNECT) { - connectionEntry.setFreezed(true); - connectionEntry.setFreezeReason(freezeReason); - return connectionEntry; - } - if (connectionEntry.isFreezed()) { - return null; - } - } - - return connectionEntry; - } - - public RFuture nextPubSubConnection() { - return pubSubConnectionPool.get(); - } - - public RFuture getConnection(InetSocketAddress addr) { - ClientConnectionsEntry entry = addr2Entry.get(addr); - if (entry != null) { - return slaveConnectionPool.get(entry); - } - RedisConnectionException exception = new RedisConnectionException("Can't find entry for " + addr); - return connectionManager.newFailedFuture(exception); - } - - public RFuture nextConnection() { - return slaveConnectionPool.get(); - } - - public void returnPubSubConnection(RedisPubSubConnection connection) { - ClientConnectionsEntry entry = addr2Entry.get(connection.getRedisClient().getAddr()); - pubSubConnectionPool.returnConnection(entry, connection); - } - - public void returnConnection(RedisConnection connection) { - ClientConnectionsEntry entry = addr2Entry.get(connection.getRedisClient().getAddr()); - slaveConnectionPool.returnConnection(entry, connection); - } - - public void shutdown() { - for (ClientConnectionsEntry entry : addr2Entry.values()) { - entry.getClient().shutdown(); - } - } - - public void shutdownAsync() { - for (ClientConnectionsEntry entry : addr2Entry.values()) { - connectionManager.shutdownAsync(entry.getClient()); - } - } - -} diff --git a/redisson/src/main/java/org/redisson/connection/balancer/WeightedRoundRobinBalancer.java b/redisson/src/main/java/org/redisson/connection/balancer/WeightedRoundRobinBalancer.java index c5227783c..e5224013c 100644 --- a/redisson/src/main/java/org/redisson/connection/balancer/WeightedRoundRobinBalancer.java +++ b/redisson/src/main/java/org/redisson/connection/balancer/WeightedRoundRobinBalancer.java @@ -16,7 +16,7 @@ package org.redisson.connection.balancer; import java.net.InetSocketAddress; -import java.net.URI; +import java.net.URL; import java.util.ArrayList; import java.util.HashMap; import java.util.HashSet; @@ -28,7 +28,7 @@ import java.util.Set; import java.util.concurrent.atomic.AtomicInteger; import org.redisson.connection.ClientConnectionsEntry; -import org.redisson.misc.URIBuilder; +import org.redisson.misc.URLBuilder; import io.netty.util.internal.PlatformDependent; @@ -78,7 +78,7 @@ public class WeightedRoundRobinBalancer implements LoadBalancer { */ public WeightedRoundRobinBalancer(Map weights, int defaultWeight) { for (Entry entry : weights.entrySet()) { - URI uri = URIBuilder.create(entry.getKey()); + URL uri = URLBuilder.create(entry.getKey()); InetSocketAddress addr = new InetSocketAddress(uri.getHost(), uri.getPort()); if (entry.getValue() <= 0) 
{ throw new IllegalArgumentException("Weight can't be less than or equal zero"); diff --git a/redisson/src/main/java/org/redisson/connection/decoder/MapGetAllDecoder.java b/redisson/src/main/java/org/redisson/connection/decoder/MapGetAllDecoder.java index 4d03b692a..2ff9a99da 100644 --- a/redisson/src/main/java/org/redisson/connection/decoder/MapGetAllDecoder.java +++ b/redisson/src/main/java/org/redisson/connection/decoder/MapGetAllDecoder.java @@ -17,7 +17,7 @@ package org.redisson.connection.decoder; import java.io.IOException; import java.util.Collections; -import java.util.HashMap; +import java.util.LinkedHashMap; import java.util.List; import java.util.Map; @@ -30,10 +30,16 @@ public class MapGetAllDecoder implements MultiDecoder> { private final int shiftIndex; private final List args; + private final boolean allowNulls; public MapGetAllDecoder(List args, int shiftIndex) { + this(args, shiftIndex, false); + } + + public MapGetAllDecoder(List args, int shiftIndex, boolean allowNulls) { this.args = args; this.shiftIndex = shiftIndex; + this.allowNulls = allowNulls; } @Override @@ -51,10 +57,10 @@ public class MapGetAllDecoder implements MultiDecoder> { if (parts.isEmpty()) { return Collections.emptyMap(); } - Map result = new HashMap(parts.size()); + Map result = new LinkedHashMap(parts.size()); for (int index = 0; index < args.size()-shiftIndex; index++) { Object value = parts.get(index); - if (value == null) { + if (!allowNulls && value == null) { continue; } result.put(args.get(index+shiftIndex), value); diff --git a/redisson/src/main/java/org/redisson/connection/pool/ConnectionPool.java b/redisson/src/main/java/org/redisson/connection/pool/ConnectionPool.java index 6ac025101..695e8b086 100644 --- a/redisson/src/main/java/org/redisson/connection/pool/ConnectionPool.java +++ b/redisson/src/main/java/org/redisson/connection/pool/ConnectionPool.java @@ -20,12 +20,14 @@ import java.util.LinkedList; import java.util.List; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import org.redisson.api.NodeType; import org.redisson.api.RFuture; import org.redisson.client.RedisConnection; import org.redisson.client.RedisConnectionException; +import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommands; import org.redisson.config.MasterSlaveServersConfig; import org.redisson.connection.ClientConnectionsEntry; @@ -104,39 +106,52 @@ abstract class ConnectionPool { initPromise.tryFailure(cause); return; } - - RFuture promise = createConnection(entry); - promise.addListener(new FutureListener() { + + acquireConnection(entry, new Runnable() { + @Override - public void operationComplete(Future future) throws Exception { - if (future.isSuccess()) { - T conn = future.getNow(); - releaseConnection(entry, conn); - } + public void run() { + RPromise promise = connectionManager.newPromise(); + createConnection(entry, promise); + promise.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (future.isSuccess()) { + T conn = future.getNow(); - releaseConnection(entry); + releaseConnection(entry, conn); + } - if (!future.isSuccess()) { - Throwable cause = new RedisConnectionException( - "Can't init enough connections amount! Only " + (minimumIdleSize - initializedConnections.get()) + " from " + minimumIdleSize + " were initialized. 
Server: " - + entry.getClient().getAddr(), future.cause()); - initPromise.tryFailure(cause); - return; - } + releaseConnection(entry); - int value = initializedConnections.decrementAndGet(); - if (value == 0) { - log.info("{} connections initialized for {}", minimumIdleSize, entry.getClient().getAddr()); - if (!initPromise.trySuccess(null)) { - throw new IllegalStateException(); - } - } else if (value > 0 && !initPromise.isDone()) { - if (requests.incrementAndGet() <= minimumIdleSize) { - createConnection(checkFreezed, requests, entry, initPromise, minimumIdleSize, initializedConnections); + if (!future.isSuccess()) { + Throwable cause = new RedisConnectionException( + "Can't init enough connections amount! Only " + (minimumIdleSize - initializedConnections.get()) + " from " + minimumIdleSize + " were initialized. Server: " + + entry.getClient().getAddr(), future.cause()); + initPromise.tryFailure(cause); + return; + } + + int value = initializedConnections.decrementAndGet(); + if (value == 0) { + log.info("{} connections initialized for {}", minimumIdleSize, entry.getClient().getAddr()); + if (!initPromise.trySuccess(null)) { + throw new IllegalStateException(); + } + } else if (value > 0 && !initPromise.isDone()) { + if (requests.incrementAndGet() <= minimumIdleSize) { + createConnection(checkFreezed, requests, entry, initPromise, minimumIdleSize, initializedConnections); + } + } } - } + }); } }); + + } + + protected void acquireConnection(ClientConnectionsEntry entry, Runnable runnable) { + entry.acquireConnection(runnable); } protected abstract int getMinimumIdleSize(ClientConnectionsEntry entry); @@ -145,40 +160,41 @@ abstract class ConnectionPool { return config.getLoadBalancer().getEntry(entries); } - public RFuture get() { + public RFuture get(RedisCommand command) { for (int j = entries.size() - 1; j >= 0; j--) { - ClientConnectionsEntry entry = getEntry(); - if (!entry.isFreezed() && tryAcquireConnection(entry)) { - return connectTo(entry); + final ClientConnectionsEntry entry = getEntry(); + if (!entry.isFreezed() + && tryAcquireConnection(entry)) { + return acquireConnection(command, entry); } } - - List zeroConnectionsAmount = new LinkedList(); + + List failedAttempts = new LinkedList(); List freezed = new LinkedList(); for (ClientConnectionsEntry entry : entries) { if (entry.isFreezed()) { freezed.add(entry.getClient().getAddr()); } else { - zeroConnectionsAmount.add(entry.getClient().getAddr()); + failedAttempts.add(entry.getClient().getAddr()); } } - StringBuilder errorMsg = new StringBuilder(getClass().getSimpleName() + " exhausted! "); + StringBuilder errorMsg = new StringBuilder(getClass().getSimpleName() + " no available Redis entries. 
"); if (!freezed.isEmpty()) { errorMsg.append(" Disconnected hosts: " + freezed); } - if (!zeroConnectionsAmount.isEmpty()) { - errorMsg.append(" Hosts with fully busy connections: " + zeroConnectionsAmount); + if (!failedAttempts.isEmpty()) { + errorMsg.append(" Hosts disconnected due to `failedAttempts` limit reached: " + failedAttempts); } RedisConnectionException exception = new RedisConnectionException(errorMsg.toString()); return connectionManager.newFailedFuture(exception); } - public RFuture get(ClientConnectionsEntry entry) { + public RFuture get(RedisCommand command, ClientConnectionsEntry entry) { if (((entry.getNodeType() == NodeType.MASTER && entry.getFreezeReason() == FreezeReason.SYSTEM) || !entry.isFreezed()) && tryAcquireConnection(entry)) { - return connectTo(entry); + return acquireConnection(command, entry); } RedisConnectionException exception = new RedisConnectionException( @@ -186,8 +202,34 @@ abstract class ConnectionPool { return connectionManager.newFailedFuture(exception); } + public static abstract class AcquireCallback implements Runnable, FutureListener { + + } + + private RFuture acquireConnection(RedisCommand command, final ClientConnectionsEntry entry) { + final RPromise result = connectionManager.newPromise(); + + AcquireCallback callback = new AcquireCallback() { + @Override + public void run() { + result.removeListener(this); + connectTo(entry, result); + } + + @Override + public void operationComplete(Future future) throws Exception { + entry.removeConnection(this); + } + }; + + result.addListener(callback); + acquireConnection(entry, callback); + + return result; + } + protected boolean tryAcquireConnection(ClientConnectionsEntry entry) { - return entry.getFailedAttempts() < config.getFailedAttempts() && entry.tryAcquireConnection(); + return entry.getFailedAttempts() < config.getFailedAttempts(); } protected T poll(ClientConnectionsEntry entry) { @@ -198,21 +240,26 @@ abstract class ConnectionPool { return (RFuture) entry.connect(); } - private RFuture connectTo(ClientConnectionsEntry entry) { + private void connectTo(ClientConnectionsEntry entry, RPromise promise) { + if (promise.isDone()) { + releaseConnection(entry); + return; + } T conn = poll(entry); if (conn != null) { if (!conn.isActive()) { - return promiseFailure(entry, conn); + promiseFailure(entry, promise, conn); + return; } - return promiseSuccessful(entry, conn); + connectedSuccessful(entry, promise, conn); + return; } - return createConnection(entry); + createConnection(entry, promise); } - private RFuture createConnection(final ClientConnectionsEntry entry) { - final RPromise promise = connectionManager.newPromise(); + private void createConnection(final ClientConnectionsEntry entry, final RPromise promise) { RFuture connFuture = connect(entry); connFuture.addListener(new FutureListener() { @Override @@ -231,7 +278,6 @@ abstract class ConnectionPool { connectedSuccessful(entry, promise, conn); } }); - return promise; } private void connectedSuccessful(ClientConnectionsEntry entry, RPromise promise, T conn) { @@ -242,11 +288,6 @@ abstract class ConnectionPool { } } - private RFuture promiseSuccessful(ClientConnectionsEntry entry, T conn) { - entry.resetFailedAttempts(); - return (RFuture) conn.getAcquireFuture(); - } - private void promiseFailure(ClientConnectionsEntry entry, RPromise promise, Throwable cause) { if (entry.incFailedAttempts() == config.getFailedAttempts()) { checkForReconnect(entry); @@ -274,23 +315,6 @@ abstract class ConnectionPool { promise.tryFailure(cause); } - 
private RFuture promiseFailure(ClientConnectionsEntry entry, T conn) { - int attempts = entry.incFailedAttempts(); - if (attempts == config.getFailedAttempts()) { - conn.closeAsync(); - checkForReconnect(entry); - } else if (attempts < config.getFailedAttempts()) { - releaseConnection(entry, conn); - } else { - conn.closeAsync(); - } - - releaseConnection(entry); - - RedisConnectionException cause = new RedisConnectionException(conn + " is not active!"); - return connectionManager.newFailedFuture(cause); - } - private void checkForReconnect(ClientConnectionsEntry entry) { if (entry.getNodeType() == NodeType.SLAVE) { masterSlaveEntry.slaveDown(entry.getClient().getAddr().getHostName(), diff --git a/redisson/src/main/java/org/redisson/connection/pool/PubSubConnectionPool.java b/redisson/src/main/java/org/redisson/connection/pool/PubSubConnectionPool.java index 82ede0d4c..859623a9b 100644 --- a/redisson/src/main/java/org/redisson/connection/pool/PubSubConnectionPool.java +++ b/redisson/src/main/java/org/redisson/connection/pool/PubSubConnectionPool.java @@ -17,6 +17,8 @@ package org.redisson.connection.pool; import org.redisson.api.RFuture; import org.redisson.client.RedisPubSubConnection; +import org.redisson.client.protocol.RedisCommand; +import org.redisson.client.protocol.RedisCommands; import org.redisson.config.MasterSlaveServersConfig; import org.redisson.connection.ClientConnectionsEntry; import org.redisson.connection.ConnectionManager; @@ -34,6 +36,10 @@ public class PubSubConnectionPool extends ConnectionPool super(config, connectionManager, masterSlaveEntry); } + public RFuture get() { + return get(RedisCommands.PUBLISH); + } + @Override protected RedisPubSubConnection poll(ClientConnectionsEntry entry) { return entry.pollSubscribeConnection(); @@ -50,10 +56,10 @@ public class PubSubConnectionPool extends ConnectionPool } @Override - protected boolean tryAcquireConnection(ClientConnectionsEntry entry) { - return entry.tryAcquireSubscribeConnection(); + protected void acquireConnection(ClientConnectionsEntry entry, Runnable runnable) { + entry.acquireSubscribeConnection(runnable); } - + @Override protected void releaseConnection(ClientConnectionsEntry entry) { entry.releaseSubscribeConnection(); diff --git a/redisson/src/main/java/org/redisson/eviction/EvictionScheduler.java b/redisson/src/main/java/org/redisson/eviction/EvictionScheduler.java new file mode 100644 index 000000000..d51cd8042 --- /dev/null +++ b/redisson/src/main/java/org/redisson/eviction/EvictionScheduler.java @@ -0,0 +1,74 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.eviction; + +import java.util.concurrent.ConcurrentMap; + +import org.redisson.command.CommandAsyncExecutor; + +import io.netty.util.internal.PlatformDependent; + +/** + * Eviction scheduler. + * Deletes expired entries in time interval between 5 seconds to 2 hours. + * It analyzes deleted amount of expired keys + * and 'tune' next execution delay depending on it. 
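
The connection pool and load balancer changes above are driven by existing master/slave configuration options: `failedAttempts` decides when a slave entry is frozen for reconnect, and the `WeightedRoundRobinBalancer` constructor shown earlier now resolves its weight keys through the URL-based address format. A hedged sketch of a master/slave setup exercising those options, assuming the setter names match the getters used in the pool code above:

```java
import java.util.HashMap;
import java.util.Map;

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.redisson.config.ReadMode;
import org.redisson.connection.balancer.WeightedRoundRobinBalancer;

public class MasterSlaveSetupExample {
    public static void main(String[] args) {
        Map<String, Integer> weights = new HashMap<String, Integer>();
        weights.put("redis://10.0.0.2:6379", 2);   // preferred slave
        weights.put("redis://10.0.0.3:6379", 1);

        Config config = new Config();
        config.useMasterSlaveServers()
              .setMasterAddress("redis://10.0.0.1:6379")
              .addSlaveAddress("redis://10.0.0.2:6379", "redis://10.0.0.3:6379")
              .setReadMode(ReadMode.SLAVE)                       // reads served by slaves
              .setLoadBalancer(new WeightedRoundRobinBalancer(weights, 1))
              .setFailedAttempts(3)                              // freeze an entry after 3 failed attempts
              .setRetryAttempts(3)
              .setRetryInterval(1500);

        RedissonClient redisson = Redisson.create(config);
        redisson.shutdown();
    }
}
```
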
+ * + * @author Nikita Koksharov + * + */ +public class EvictionScheduler { + + private final ConcurrentMap tasks = PlatformDependent.newConcurrentHashMap(); + private final CommandAsyncExecutor executor; + + public EvictionScheduler(CommandAsyncExecutor executor) { + this.executor = executor; + } + + public void scheduleCleanMultimap(String name, String timeoutSetName) { + EvictionTask task = new MultimapEvictionTask(name, timeoutSetName, executor); + EvictionTask prevTask = tasks.putIfAbsent(name, task); + if (prevTask == null) { + task.schedule(); + } + } + + public void scheduleJCache(String name, String timeoutSetName, String expiredChannelName) { + EvictionTask task = new JCacheEvictionTask(name, timeoutSetName, expiredChannelName, executor); + EvictionTask prevTask = tasks.putIfAbsent(name, task); + if (prevTask == null) { + task.schedule(); + } + } + + public void schedule(String name) { + EvictionTask task = new SetCacheEvictionTask(name, executor); + EvictionTask prevTask = tasks.putIfAbsent(name, task); + if (prevTask == null) { + task.schedule(); + } + } + + public void schedule(String name, String timeoutSetName, String maxIdleSetName) { + EvictionTask task = new MapCacheEvictionTask(name, timeoutSetName, maxIdleSetName, executor); + EvictionTask prevTask = tasks.putIfAbsent(name, task); + if (prevTask == null) { + task.schedule(); + } + } + +} diff --git a/redisson/src/main/java/org/redisson/eviction/EvictionTask.java b/redisson/src/main/java/org/redisson/eviction/EvictionTask.java new file mode 100644 index 000000000..346b1036c --- /dev/null +++ b/redisson/src/main/java/org/redisson/eviction/EvictionTask.java @@ -0,0 +1,98 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.eviction; + +import java.util.Deque; +import java.util.LinkedList; +import java.util.concurrent.TimeUnit; + +import org.redisson.api.RFuture; +import org.redisson.command.CommandAsyncExecutor; + +import io.netty.util.concurrent.Future; +import io.netty.util.concurrent.FutureListener; + +/** + * + * @author Nikita Koksharov + * + */ +abstract class EvictionTask implements Runnable { + + final Deque sizeHistory = new LinkedList(); + final int minDelay = 1; + final int maxDelay = 2*60*60; + final int keysLimit = 300; + + int delay = 10; + + final CommandAsyncExecutor executor; + + EvictionTask(CommandAsyncExecutor executor) { + super(); + this.executor = executor; + } + + public void schedule() { + executor.getConnectionManager().getGroup().schedule(this, delay, TimeUnit.SECONDS); + } + + abstract RFuture execute(); + + @Override + public void run() { + RFuture future = execute(); + future.addListener(new FutureListener() { + @Override + public void operationComplete(Future future) throws Exception { + if (!future.isSuccess()) { + schedule(); + return; + } + + Integer size = future.getNow(); + + if (sizeHistory.size() == 2) { + if (sizeHistory.peekFirst() > sizeHistory.peekLast() + && sizeHistory.peekLast() > size) { + delay = Math.min(maxDelay, (int)(delay*1.5)); + } + +// if (sizeHistory.peekFirst() < sizeHistory.peekLast() +// && sizeHistory.peekLast() < size) { +// prevDelay = Math.max(minDelay, prevDelay/2); +// } + + if (sizeHistory.peekFirst().intValue() == sizeHistory.peekLast() + && sizeHistory.peekLast().intValue() == size) { + if (size == keysLimit) { + delay = Math.max(minDelay, delay/4); + } + if (size == 0) { + delay = Math.min(maxDelay, (int)(delay*1.5)); + } + } + + sizeHistory.pollFirst(); + } + + sizeHistory.add(size); + schedule(); + } + }); + } + +} diff --git a/redisson/src/main/java/org/redisson/eviction/JCacheEvictionTask.java b/redisson/src/main/java/org/redisson/eviction/JCacheEvictionTask.java new file mode 100644 index 000000000..c79ea2fe2 --- /dev/null +++ b/redisson/src/main/java/org/redisson/eviction/JCacheEvictionTask.java @@ -0,0 +1,60 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
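The delay-tuning block inside `EvictionTask.run()` above is easier to follow outside of the Netty `FutureListener`. The sketch below (a hypothetical class, not part of this patch) restates the same heuristic as a pure function: the task keeps the expired-key counts of its last two runs, backs off by 1.5x (up to 2 hours) while the counts keep shrinking or stay at zero, and polls four times more often (down to 1 second) while every pass keeps hitting the 300-key batch limit.

```java
// Illustrative restatement of the EvictionTask delay heuristic; all names here are made up.
public class EvictionDelayHeuristic {

    static final int MIN_DELAY = 1;            // seconds, mirrors EvictionTask.minDelay
    static final int MAX_DELAY = 2 * 60 * 60;  // seconds, mirrors EvictionTask.maxDelay
    static final int KEYS_LIMIT = 300;         // batch size, mirrors EvictionTask.keysLimit

    /**
     * @param older   expired-key count from two runs ago
     * @param newer   expired-key count from the previous run
     * @param current expired-key count from the run that just finished
     * @param delay   delay in seconds used before the run that just finished
     * @return delay in seconds to wait before the next run
     */
    static int nextDelay(int older, int newer, int current, int delay) {
        // Counts are strictly shrinking: expiration pressure is dropping, back off by 1.5x.
        if (older > newer && newer > current) {
            return Math.min(MAX_DELAY, (int) (delay * 1.5));
        }
        if (older == newer && newer == current) {
            // Every run hits the batch limit: the task is falling behind, poll 4x more often.
            if (current == KEYS_LIMIT) {
                return Math.max(MIN_DELAY, delay / 4);
            }
            // Nothing has expired for three runs in a row: slow down.
            if (current == 0) {
                return Math.min(MAX_DELAY, (int) (delay * 1.5));
            }
        }
        return delay; // otherwise keep the current delay
    }

    public static void main(String[] args) {
        System.out.println(nextDelay(300, 300, 300, 10)); // 2  -> saturated, speed up
        System.out.println(nextDelay(0, 0, 0, 10));       // 15 -> idle, back off
        System.out.println(nextDelay(120, 40, 5, 10));    // 15 -> shrinking, back off
    }
}
```

Because `run()` ends by calling `schedule()` again with the adjusted `delay`, the cleanup loop is self-rescheduling rather than driven by a fixed-rate timer.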
+ */ +package org.redisson.eviction; + +import java.util.Arrays; + +import org.redisson.api.RFuture; +import org.redisson.client.codec.LongCodec; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.command.CommandAsyncExecutor; + +/** + * + * @author Nikita Koksharov + * + */ +public class JCacheEvictionTask extends EvictionTask { + + private final String name; + private final String timeoutSetName; + private final String expiredChannelName; + + public JCacheEvictionTask(String name, String timeoutSetName, String expiredChannelName, CommandAsyncExecutor executor) { + super(executor); + this.name = name; + this.timeoutSetName = timeoutSetName; + this.expiredChannelName = expiredChannelName; + } + + @Override + RFuture execute() { + return executor.evalWriteAsync(name, LongCodec.INSTANCE, RedisCommands.EVAL_INTEGER, + "local expiredKeys = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " + + "for i, k in ipairs(expiredKeys) do " + + "local v = redis.call('hget', KEYS[1], k);" + + "local msg = struct.pack('Lc0Lc0', string.len(tostring(k)), tostring(k), string.len(tostring(v)), tostring(v));" + + "redis.call('publish', KEYS[3], msg);" + + "end; " + + "if #expiredKeys > 0 then " + + "redis.call('zrem', KEYS[2], unpack(expiredKeys)); " + + "redis.call('hdel', KEYS[1], unpack(expiredKeys)); " + + "end; " + + "return #expiredKeys;", + Arrays.asList(name, timeoutSetName, expiredChannelName), System.currentTimeMillis(), keysLimit); + } + +} diff --git a/redisson/src/main/java/org/redisson/eviction/MapCacheEvictionTask.java b/redisson/src/main/java/org/redisson/eviction/MapCacheEvictionTask.java new file mode 100644 index 000000000..9c6e0341f --- /dev/null +++ b/redisson/src/main/java/org/redisson/eviction/MapCacheEvictionTask.java @@ -0,0 +1,62 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.eviction; + +import java.util.Arrays; + +import org.redisson.api.RFuture; +import org.redisson.client.codec.LongCodec; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.command.CommandAsyncExecutor; + +/** + * + * @author Nikita Koksharov + * + */ +public class MapCacheEvictionTask extends EvictionTask { + + private final String name; + private final String timeoutSetName; + private final String maxIdleSetName; + + public MapCacheEvictionTask(String name, String timeoutSetName, String maxIdleSetName, CommandAsyncExecutor executor) { + super(executor); + this.name = name; + this.timeoutSetName = timeoutSetName; + this.maxIdleSetName = maxIdleSetName; + } + + @Override + RFuture execute() { + return executor.evalWriteAsync(name, LongCodec.INSTANCE, RedisCommands.EVAL_INTEGER, + "local expiredKeys1 = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " + + "if #expiredKeys1 > 0 then " + + "redis.call('zrem', KEYS[3], unpack(expiredKeys1)); " + + "redis.call('zrem', KEYS[2], unpack(expiredKeys1)); " + + "redis.call('hdel', KEYS[1], unpack(expiredKeys1)); " + + "end; " + + "local expiredKeys2 = redis.call('zrangebyscore', KEYS[3], 0, ARGV[1], 'limit', 0, ARGV[2]); " + + "if #expiredKeys2 > 0 then " + + "redis.call('zrem', KEYS[3], unpack(expiredKeys2)); " + + "redis.call('zrem', KEYS[2], unpack(expiredKeys2)); " + + "redis.call('hdel', KEYS[1], unpack(expiredKeys2)); " + + "end; " + + "return #expiredKeys1 + #expiredKeys2;", + Arrays.asList(name, timeoutSetName, maxIdleSetName), System.currentTimeMillis(), keysLimit); + } + +} diff --git a/redisson/src/main/java/org/redisson/eviction/MultimapEvictionTask.java b/redisson/src/main/java/org/redisson/eviction/MultimapEvictionTask.java new file mode 100644 index 000000000..dca62e65e --- /dev/null +++ b/redisson/src/main/java/org/redisson/eviction/MultimapEvictionTask.java @@ -0,0 +1,61 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.eviction; + +import java.util.Arrays; + +import org.redisson.api.RFuture; +import org.redisson.client.codec.LongCodec; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.command.CommandAsyncExecutor; + +/** + * + * @author Nikita Koksharov + * + */ +public class MultimapEvictionTask extends EvictionTask { + + private final String name; + private final String timeoutSetName; + + public MultimapEvictionTask(String name, String timeoutSetName, CommandAsyncExecutor executor) { + super(executor); + this.name = name; + this.timeoutSetName = timeoutSetName; + } + + RFuture execute() { + return executor.evalWriteAsync(name, LongCodec.INSTANCE, RedisCommands.EVAL_INTEGER, + "local expiredKeys = redis.call('zrangebyscore', KEYS[2], 0, ARGV[1], 'limit', 0, ARGV[2]); " + + "if #expiredKeys > 0 then " + + "redis.call('zrem', KEYS[2], unpack(expiredKeys)); " + + + "local values = redis.call('hmget', KEYS[1], unpack(expiredKeys)); " + + "local keys = {}; " + + "for i, v in ipairs(values) do " + + "local name = '{' .. KEYS[1] .. '}:' .. v; " + + "table.insert(keys, name); " + + "end; " + + "redis.call('del', unpack(keys)); " + + + "redis.call('hdel', KEYS[1], unpack(expiredKeys)); " + + "end; " + + "return #expiredKeys;", + Arrays.asList(name, timeoutSetName), System.currentTimeMillis(), keysLimit); + } + +} diff --git a/redisson/src/main/java/org/redisson/eviction/SetCacheEvictionTask.java b/redisson/src/main/java/org/redisson/eviction/SetCacheEvictionTask.java new file mode 100644 index 000000000..826c879bc --- /dev/null +++ b/redisson/src/main/java/org/redisson/eviction/SetCacheEvictionTask.java @@ -0,0 +1,42 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.eviction; + +import org.redisson.api.RFuture; +import org.redisson.client.codec.LongCodec; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.command.CommandAsyncExecutor; + +/** + * + * @author Nikita Koksharov + * + */ +public class SetCacheEvictionTask extends EvictionTask { + + private final String name; + + public SetCacheEvictionTask(String name, CommandAsyncExecutor executor) { + super(executor); + this.name = name; + } + + @Override + RFuture execute() { + return executor.writeAsync(name, LongCodec.INSTANCE, RedisCommands.ZREMRANGEBYSCORE, name, 0, System.currentTimeMillis()); + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/JCache.java b/redisson/src/main/java/org/redisson/jcache/JCache.java new file mode 100644 index 000000000..e3d128c76 --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/JCache.java @@ -0,0 +1,2441 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
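All of the eviction tasks added above work against the same storage layout: the object's main structure (a hash for the map, multimap and JCache variants, the scored set itself for the set-cache variant) plus a companion "timeout" sorted set whose score is the absolute expiration time in milliseconds; the map-cache variant consults two such sorted sets, one for TTL and one for max-idle time. Each pass runs a Lua script that takes at most `keysLimit` members whose score is not greater than the current time and removes them from both structures, with the multimap variant also deleting the per-entry value collections and the JCache variant additionally publishing an expired-entry event. The in-memory model below is purely illustrative (none of these names exist in the patch, and it assumes one key per timestamp for brevity), but it captures the semantics of the `zrangebyscore ... limit 0 keysLimit` / `zrem` / `hdel` sequence:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

// Minimal in-memory analogue of the hash + timeout-zset layout used by the eviction tasks.
public class EvictionModel {

    final Map<String, String> entries = new HashMap<>();    // models the Redis hash (KEYS[1])
    final TreeMap<Long, String> timeouts = new TreeMap<>();  // models the timeout zset (KEYS[2])

    void put(String key, String value, long expireAtMillis) {
        entries.put(key, value);
        timeouts.put(expireAtMillis, key);
    }

    /** Removes expired entries, at most keysLimit per pass, and returns how many were removed. */
    int evictExpired(long nowMillis, int keysLimit) {
        int removed = 0;
        Iterator<Map.Entry<Long, String>> it =
                timeouts.headMap(nowMillis, true).entrySet().iterator();  // score <= now
        while (it.hasNext() && removed < keysLimit) {
            Map.Entry<Long, String> expired = it.next();
            entries.remove(expired.getValue());  // HDEL name key
            it.remove();                         // ZREM timeoutSet key
            removed++;
        }
        return removed;  // this count is what feeds the delay heuristic above
    }
}
```

The returned count is exactly what `EvictionTask` records in `sizeHistory`, closing the loop between how much work a pass found and how soon the next pass runs.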
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache; + +import java.net.InetSocketAddress; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.TimeUnit; + +import javax.cache.Cache; +import javax.cache.CacheManager; +import javax.cache.configuration.CacheEntryListenerConfiguration; +import javax.cache.configuration.Configuration; +import javax.cache.configuration.Factory; +import javax.cache.event.CacheEntryCreatedListener; +import javax.cache.event.CacheEntryEvent; +import javax.cache.event.CacheEntryEventFilter; +import javax.cache.event.CacheEntryExpiredListener; +import javax.cache.event.CacheEntryListener; +import javax.cache.event.CacheEntryRemovedListener; +import javax.cache.event.CacheEntryUpdatedListener; +import javax.cache.event.EventType; +import javax.cache.integration.CacheLoader; +import javax.cache.integration.CacheLoaderException; +import javax.cache.integration.CacheWriter; +import javax.cache.integration.CacheWriterException; +import javax.cache.integration.CompletionListener; +import javax.cache.processor.EntryProcessor; +import javax.cache.processor.EntryProcessorException; +import javax.cache.processor.EntryProcessorResult; + +import org.redisson.Redisson; +import org.redisson.RedissonBaseMapIterator; +import org.redisson.RedissonObject; +import org.redisson.api.RFuture; +import org.redisson.api.RLock; +import org.redisson.api.RSemaphore; +import org.redisson.api.RTopic; +import org.redisson.api.listener.MessageListener; +import org.redisson.client.codec.MapScanCodec; +import org.redisson.client.protocol.RedisCommand; +import org.redisson.client.protocol.RedisCommand.ValueType; +import org.redisson.client.protocol.RedisCommands; +import org.redisson.client.protocol.convertor.BooleanAmountReplayConvertor; +import org.redisson.client.protocol.convertor.BooleanReplayConvertor; +import org.redisson.client.protocol.convertor.EmptyConvertor; +import org.redisson.client.protocol.decoder.MapScanResult; +import org.redisson.client.protocol.decoder.ObjectListReplayDecoder; +import org.redisson.client.protocol.decoder.ScanObjectEntry; +import org.redisson.connection.decoder.MapGetAllDecoder; +import org.redisson.misc.Hash; + +import org.redisson.jcache.JMutableEntry.Action; +import org.redisson.jcache.configuration.JCacheConfiguration; + +import io.netty.util.internal.ThreadLocalRandom; + +/** + * JCache implementation + * + * @author Nikita Koksharov + * + * @param key + * @param value + */ +public class JCache extends RedissonObject implements Cache { + + private static final RedisCommand EVAL_GET_REPLACE = new RedisCommand("EVAL", 9, ValueType.MAP, ValueType.MAP_VALUE); + private static final RedisCommand EVAL_REPLACE_OLD_NEW_VALUE = new RedisCommand("EVAL", new EmptyConvertor(), 10, Arrays.asList(ValueType.MAP_KEY, ValueType.MAP_VALUE, ValueType.MAP_VALUE)); + private static final RedisCommand 
EVAL_REPLACE_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 9, ValueType.MAP); + private static final RedisCommand EVAL_GET_REMOVE_VALUE = new RedisCommand("EVAL", 7, ValueType.MAP_KEY, ValueType.MAP_VALUE); + private static final RedisCommand> EVAL_GET_REMOVE_VALUE_LIST = new RedisCommand>("EVAL", new ObjectListReplayDecoder(), 10, ValueType.OBJECT, ValueType.MAP_VALUE); + private static final RedisCommand EVAL_REMOVE_VALUES = new RedisCommand("EVAL", 5, ValueType.MAP_KEY); + private static final RedisCommand EVAL_REMOVE_VALUE = new RedisCommand("EVAL", new BooleanAmountReplayConvertor(), 7, ValueType.MAP_KEY); + private static final RedisCommand EVAL_GET_TTL = new RedisCommand("EVAL", 8, ValueType.MAP_KEY, ValueType.MAP_VALUE); + private static final RedisCommand EVAL_PUT = new RedisCommand("EVAL", new BooleanReplayConvertor(), 11, ValueType.MAP); + private static final RedisCommand EVAL_PUT_IF_ABSENT = new RedisCommand("EVAL", new BooleanReplayConvertor(), 7, ValueType.MAP); + private static final RedisCommand EVAL_REMOVE_KEY_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 8, ValueType.MAP); + private static final RedisCommand EVAL_CONTAINS_KEY = new RedisCommand("EVAL", new BooleanReplayConvertor(), 6, ValueType.MAP_KEY); + + private final JCacheManager cacheManager; + private final JCacheConfiguration config; + private final ConcurrentMap, Map> listeners = + new ConcurrentHashMap, Map>(); + private final Redisson redisson; + + private CacheLoader cacheLoader; + private CacheWriter cacheWriter; + private boolean closed; + private boolean hasOwnRedisson; + + public JCache(JCacheManager cacheManager, Redisson redisson, String name, JCacheConfiguration config, boolean hasOwnRedisson) { + super(redisson.getConfig().getCodec(), redisson.getCommandExecutor(), name); + + this.hasOwnRedisson = hasOwnRedisson; + this.redisson = redisson; + + Factory> cacheLoaderFactory = config.getCacheLoaderFactory(); + if (cacheLoaderFactory != null) { + cacheLoader = cacheLoaderFactory.create(); + } + Factory> cacheWriterFactory = config.getCacheWriterFactory(); + if (config.getCacheWriterFactory() != null) { + cacheWriter = (CacheWriter) cacheWriterFactory.create(); + } + + this.cacheManager = cacheManager; + this.config = config; + + redisson.getEvictionScheduler().scheduleJCache(getName(), getTimeoutSetName(), getExpiredChannelName()); + + for (CacheEntryListenerConfiguration listenerConfig : config.getCacheEntryListenerConfigurations()) { + registerCacheEntryListener(listenerConfig, false); + } + } + + private void checkNotClosed() { + if (closed) { + throw new IllegalStateException(); + } + } + + String getTimeoutSetName() { + return "jcache_timeout_set:{" + getName() + "}"; + } + + String getSyncName(Object syncId) { + return "jcache_sync:" + syncId + ":{" + getName() + "}"; + } + + String getCreatedSyncChannelName() { + return "jcache_created_sync_channel:{" + getName() + "}"; + } + + String getUpdatedSyncChannelName() { + return "jcache_updated_sync_channel:{" + getName() + "}"; + } + + String getRemovedSyncChannelName() { + return "jcache_removed_sync_channel:{" + getName() + "}"; + } + + String getCreatedChannelName() { + return "jcache_created_channel:{" + getName() + "}"; + } + + String getUpdatedChannelName() { + return "jcache_updated_channel:{" + getName() + "}"; + } + + String getExpiredChannelName() { + return "jcache_expired_channel:{" + getName() + "}"; + } + + String getRemovedChannelName() { + return "jcache_removed_channel:{" + getName() + "}"; + } + + 
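Every companion key and channel name produced by the helper methods above embeds the cache name in curly braces. The brace section acts as a Redis Cluster hash tag: keys sharing the tag are guaranteed to hash to the same slot, which is what lets the multi-key Lua scripts in this class touch the entry hash, the timeout set and the event channels inside a single `EVAL`. The snippet below is illustrative only (`myCache` and the literal `42` sync id are made-up sample values; the real sync id is a random double) and simply prints the names derived for one cache:

```java
public class JCacheKeyNames {
    public static void main(String[] args) {
        String name = "myCache";
        System.out.println("jcache_timeout_set:{" + name + "}");      // zset of expiration timestamps
        System.out.println("jcache_created_channel:{" + name + "}");  // entry-created events
        System.out.println("jcache_updated_channel:{" + name + "}");  // entry-updated events
        System.out.println("jcache_removed_channel:{" + name + "}");  // entry-removed events
        System.out.println("jcache_expired_channel:{" + name + "}");  // entry-expired events, published by JCacheEvictionTask
        System.out.println("jcache_sync:42:{" + name + "}");          // semaphore name used to wait for synchronous listeners
        // ...plus the matching *_sync_channel names used for synchronous listener notification.
    }
}
```

The per-entry lock name built by `getLockName` follows the same convention: the `{name}` tag, then a Base64 hash of the encoded key, then a `:key` suffix.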
private long currentNanoTime() { + if (config.isStatisticsEnabled()) { + return System.nanoTime(); + } + return 0; + } + + @Override + public V get(K key) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + long startTime = currentNanoTime(); + RLock lock = getLockedLock(key); + try { + V value = getValueLocked(key); + if (value == null) { + cacheManager.getStatBean(this).addMisses(1); + if (config.isReadThrough()) { + value = loadValue(key); + } + } else { + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addHits(1); + } + return value; + } finally { + lock.unlock(); + } + } + + V getValueLocked(K key) { + + V value = (V) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_GET_TTL, + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "if value == false then " + + "return nil; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[3]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[2]) then " + + "return nil; " + + "end; " + + "return value; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName()), + 0, System.currentTimeMillis(), key)); + + if (value != null) { + List result = new ArrayList(3); + result.add(value); + Long accessTimeout = getAccessTimeout(); + + double syncId = ThreadLocalRandom.current().nextDouble(); + Long syncs = (Long) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LONG, + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value), ARGV[4]); " + + "local syncs = redis.call('publish', KEYS[4], syncMsg); " + + "return syncs;" + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "return 0;" + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), + getRemovedSyncChannelName()), + accessTimeout, System.currentTimeMillis(), encodeMapKey(key), syncId)); + + result.add(syncs); + result.add(syncId); + + waitSync(result); + return value; + } + + return value; + } + + private V getValue(K key) { + Long accessTimeout = getAccessTimeout(); + + V value = (V) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_GET_TTL, + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "if value == false then " + + "return nil; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[3]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[2]) then " + + "return nil; " + + "end; " + + + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "end; " + + + 
"return value; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName()), + accessTimeout, System.currentTimeMillis(), key)); + return value; + } + + private Long getAccessTimeout() { + if (config.getExpiryPolicy().getExpiryForAccess() == null) { + return -1L; + } + Long accessTimeout = config.getExpiryPolicy().getExpiryForAccess().getAdjustedTime(System.currentTimeMillis()); + + if (config.getExpiryPolicy().getExpiryForAccess().isZero()) { + accessTimeout = 0L; + } else if (accessTimeout.longValue() == Long.MAX_VALUE) { + accessTimeout = -1L; + } + return accessTimeout; + } + + V load(K key) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + V value = getValueLocked(key); + if (value == null) { + value = loadValue(key); + } + return value; + } finally { + lock.unlock(); + } + } + + private V loadValue(K key) { + V value = null; + try { + value = cacheLoader.load(key); + } catch (Exception ex) { + throw new CacheLoaderException(ex); + } + if (value != null) { + long startTime = currentNanoTime(); + putValueLocked(key, value); + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + } + return value; + } + + private boolean putValueLocked(K key, Object value) { + double syncId = ThreadLocalRandom.current().nextDouble(); + + if (containsKey(key)) { + Long updateTimeout = getUpdateTimeout(); + List res = (List) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LIST, + "if ARGV[2] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[4]); " + + "redis.call('zrem', KEYS[2], ARGV[4]); " + + "local value = redis.call('hget', KEYS[1], ARGV[4]);" + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value), ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[7], syncMsg); " + + "return {0, syncs};" + + "elseif ARGV[2] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "redis.call('zadd', KEYS[2], ARGV[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[5], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[8], syncMsg); " + + "return {1, syncs};" + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[5], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[8], syncMsg); " + + "return {1, syncs};" + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getCreatedChannelName(), getRemovedChannelName(), getUpdatedChannelName(), + getCreatedSyncChannelName(), getRemovedSyncChannelName(), getUpdatedSyncChannelName()), + 0, updateTimeout, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(value), syncId)); + + res.add(syncId); + waitSync(res); + + return (Long) res.get(0) == 1; + } + + Long creationTimeout = getCreationTimeout(); + List res = (List) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LIST, + "if ARGV[1] == '0' then " + + "return {0};" + + "elseif ARGV[1] ~= 
'-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[6], syncMsg); " + + "return {1, syncs};" + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[6], syncMsg); " + + "return {1, syncs};" + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getCreatedChannelName(), getRemovedChannelName(), getUpdatedChannelName(), + getCreatedSyncChannelName(), getRemovedSyncChannelName(), getUpdatedSyncChannelName()), + creationTimeout, 0, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(value), syncId)); + + res.add(syncId); + waitSync(res); + + return (Long) res.get(0) == 1; + + } + + + private boolean putValue(K key, Object value) { + double syncId = ThreadLocalRandom.current().nextDouble(); + Long creationTimeout = getCreationTimeout(); + Long updateTimeout = getUpdateTimeout(); + + List res = (List) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LIST, + "if redis.call('hexists', KEYS[1], ARGV[4]) == 1 then " + + "if ARGV[2] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[4]); " + + "redis.call('zrem', KEYS[2], ARGV[4]); " + + "local value = redis.call('hget', KEYS[1], ARGV[4]);" + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value), ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[7], syncMsg); " + + "return {0, syncs};" + + "elseif ARGV[2] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "redis.call('zadd', KEYS[2], ARGV[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[5], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[8], syncMsg); " + + "return {1, syncs};" + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[5], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[8], syncMsg); " + + "return {1, syncs};" + + "end; " + + "else " + + "if ARGV[1] == '0' then " + + "return {0};" + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], 
string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[6], syncMsg); " + + "return {1, syncs};" + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[6], syncMsg); " + + "return {1, syncs};" + + "end; " + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getCreatedChannelName(), getRemovedChannelName(), getUpdatedChannelName(), + getCreatedSyncChannelName(), getRemovedSyncChannelName(), getUpdatedSyncChannelName()), + creationTimeout, updateTimeout, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(value), syncId)); + + res.add(syncId); + waitSync(res); + + return (Long) res.get(0) == 1; + } + + private Long getUpdateTimeout() { + if (config.getExpiryPolicy().getExpiryForUpdate() == null) { + return -1L; + } + + Long updateTimeout = config.getExpiryPolicy().getExpiryForUpdate().getAdjustedTime(System.currentTimeMillis()); + if (config.getExpiryPolicy().getExpiryForUpdate().isZero()) { + updateTimeout = 0L; + } else if (updateTimeout.longValue() == Long.MAX_VALUE) { + updateTimeout = -1L; + } + return updateTimeout; + } + + private Long getCreationTimeout() { + if (config.getExpiryPolicy().getExpiryForCreation() == null) { + return -1L; + } + Long creationTimeout = config.getExpiryPolicy().getExpiryForCreation().getAdjustedTime(System.currentTimeMillis()); + if (config.getExpiryPolicy().getExpiryForCreation().isZero()) { + creationTimeout = 0L; + } else if (creationTimeout.longValue() == Long.MAX_VALUE) { + creationTimeout = -1L; + } + return creationTimeout; + } + + private boolean putIfAbsentValue(K key, Object value) { + Long creationTimeout = getCreationTimeout(); + + return (Boolean) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_PUT_IF_ABSENT, + "if redis.call('hexists', KEYS[1], ARGV[2]) == 1 then " + + "return 0; " + + "else " + + "if ARGV[1] == '0' then " + + "return 0;" + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[2], ARGV[3]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[2]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[2]), ARGV[2], string.len(ARGV[3]), ARGV[3]); " + + "redis.call('publish', KEYS[3], msg); " + + "return 1;" + + "else " + + "redis.call('hset', KEYS[1], ARGV[2], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[2]), ARGV[2], string.len(ARGV[3]), ARGV[3]); " + + "redis.call('publish', KEYS[3], msg); " + + "return 1;" + + "end; " + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getCreatedChannelName()), + creationTimeout, key, value)); + } + + private boolean putIfAbsentValueLocked(K key, Object value) { + if (containsKey(key)) { + return false; + } + + Long creationTimeout = getCreationTimeout(); + return (Boolean) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_PUT_IF_ABSENT, + "if ARGV[1] == '0' then " + + "return 0;" + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[2], ARGV[3]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[2]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[2]), ARGV[2], string.len(ARGV[3]), ARGV[3]); " + + "redis.call('publish', KEYS[3], msg); " + + "return 1;" + + "else " + + "redis.call('hset', KEYS[1], ARGV[2], ARGV[3]); " + + "local msg = 
struct.pack('Lc0Lc0', string.len(ARGV[2]), ARGV[2], string.len(ARGV[3]), ARGV[3]); " + + "redis.call('publish', KEYS[3], msg); " + + "return 1;" + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getCreatedChannelName()), + creationTimeout, key, value)); + } + + + private String getLockName(Object key) { + byte[] keyState = encodeMapKey(key); + return "{" + getName() + "}:" + Hash.hashToBase64(keyState) + ":key"; + } + + @Override + public Map getAll(Set keys) { + checkNotClosed(); + if (keys == null) { + throw new NullPointerException(); + } + for (K key : keys) { + if (key == null) { + throw new NullPointerException(); + } + } + + long startTime = currentNanoTime(); + boolean exists = false; + for (K key : keys) { + if (containsKey(key)) { + exists = true; + } + } + if (!exists && !config.isReadThrough()) { + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + return Collections.emptyMap(); + } + + + Long accessTimeout = getAccessTimeout(); + + List args = new ArrayList(keys.size() + 2); + args.add(accessTimeout); + args.add(System.currentTimeMillis()); + args.addAll(keys); + + Map res = (Map) get(commandExecutor.evalWriteAsync(getName(), codec, new RedisCommand>("EVAL", new MapGetAllDecoder(args, 2, true), 8, ValueType.MAP_KEY, ValueType.MAP_VALUE), + "local expireHead = redis.call('zrange', KEYS[2], 0, 0, 'withscores');" + + "local accessTimeout = ARGV[1]; " + + "local currentTime = tonumber(ARGV[2]); " + + "local hasExpire = #expireHead == 2 and tonumber(expireHead[2]) <= currentTime; " + + "local map = redis.call('hmget', KEYS[1], unpack(ARGV, 3, #ARGV)); " + + "local result = {};" + + "for i, value in ipairs(map) do " + + "if value ~= false then " + + "local key = ARGV[i+2]; " + + + "if hasExpire then " + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], key); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + "if expireDate <= currentTime then " + + "value = false; " + + "end; " + + "end; " + + + "if accessTimeout == '0' then " + + "redis.call('hdel', KEYS[1], key); " + + "redis.call('zrem', KEYS[2], key); " + + "local msg = struct.pack('Lc0Lc0', string.len(key), key, string.len(value), value); " + + "redis.call('publish', KEYS[3], {key, value}); " + + "elseif accessTimeout ~= '-1' then " + + "redis.call('zadd', KEYS[2], accessTimeout, key); " + + "end; " + + "end; " + + + "table.insert(result, value); " + + "end; " + + "return result;", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName()), args.toArray())); + + Map result = new HashMap(); + for (Map.Entry entry : res.entrySet()) { + if (entry.getValue() != null) { + cacheManager.getStatBean(this).addHits(1); + result.put(entry.getKey(), entry.getValue()); + } else { + if (config.isReadThrough()) { + cacheManager.getStatBean(this).addMisses(1); + V value = load(entry.getKey()); + if (value != null) { + result.put(entry.getKey(), value); + } + } + } + } + + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + + return result; + } + + @Override + public boolean containsKey(K key) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + + return (Boolean) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_CONTAINS_KEY, + "if redis.call('hexists', KEYS[1], ARGV[2]) == 0 then " + + "return 0;" + + "end;" + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[2]); " + 
+ "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[1]) then " + + "return 0; " + + "end; " + + "return 1;", + Arrays.asList(getName(), getTimeoutSetName()), + System.currentTimeMillis(), key)); + } + + @Override + public void loadAll(final Set keys, final boolean replaceExistingValues, final CompletionListener completionListener) { + checkNotClosed(); + if (keys == null) { + throw new NullPointerException(); + } + + for (K key : keys) { + if (key == null) { + throw new NullPointerException(); + } + } + + if (cacheLoader == null) { + if (completionListener != null) { + completionListener.onCompletion(); + } + return; + } + + commandExecutor.getConnectionManager().getExecutor().execute(new Runnable() { + @Override + public void run() { + for (K key : keys) { + try { + if (!containsKey(key) || replaceExistingValues) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + if (!containsKey(key)|| replaceExistingValues) { + V value; + try { + value = cacheLoader.load(key); + } catch (Exception ex) { + throw new CacheLoaderException(ex); + } + if (value != null) { + putValueLocked(key, value); + } + } + } finally { + lock.unlock(); + } + } + } catch (Exception e) { + if (completionListener != null) { + completionListener.onException(e); + } + return; + } + } + if (completionListener != null) { + completionListener.onCompletion(); + } + } + }); + } + + private RLock getLock(K key) { + String lockName = getLockName(key); + RLock lock = redisson.getLock(lockName); + return lock; + } + + private RLock getLockedLock(K key) { + String lockName = getLockName(key); + RLock lock = redisson.getLock(lockName); + lock.lock(30, TimeUnit.MINUTES); + return lock; + } + + + @Override + public void put(K key, V value) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + if (value == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + List result = getAndPutValueLocked(key, value); + if (result.isEmpty()) { + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return; + } + Long added = (Long) result.get(0); + if (added == null) { + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return; + } + + if (Long.valueOf(1).equals(added)) { + try { + cacheWriter.write(new JCacheEntry(key, value)); + } catch (CacheWriterException e) { + removeValues(key); + throw e; + } catch (Exception e) { + removeValues(key); + throw new CacheWriterException(e); + } + } else { + try { + cacheWriter.delete(key); + } catch (CacheWriterException e) { + if (result.size() == 4 && result.get(1) != null) { + putValue(key, result.get(1)); + } + throw e; + } catch (Exception e) { + if (result.size() == 4 && result.get(1) != null) { + putValue(key, result.get(1)); + } + throw new CacheWriterException(e); + } + } + cacheManager.getStatBean(this).addPuts(1); + } finally { + lock.unlock(); + } + } else { + RLock lock = getLockedLock(key); + try { + boolean result = putValueLocked(key, value); + if (result) { + cacheManager.getStatBean(this).addPuts(1); + } + } finally { + lock.unlock(); + } + } + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + } + + private long removeValues(Object... 
keys) { + return (Long) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_REMOVE_VALUES, + "redis.call('zrem', KEYS[2], unpack(ARGV)); " + + "return redis.call('hdel', KEYS[1], unpack(ARGV)); ", + Arrays.asList(getName(), getTimeoutSetName()), keys)); + } + + private List getAndPutValueLocked(K key, V value) { + double syncId = ThreadLocalRandom.current().nextDouble(); + if (containsKey(key)) { + Long updateTimeout = getUpdateTimeout(); + List result = (List) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LIST, + "local value = redis.call('hget', KEYS[1], ARGV[4]);" + + "if ARGV[2] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[4]); " + + "redis.call('zrem', KEYS[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value), ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[6], syncMsg); " + + "return {0, value, syncs};" + + "elseif ARGV[2] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "redis.call('zadd', KEYS[2], ARGV[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[5], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[8], syncMsg); " + + "return {1, value, syncs};" + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[5], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[8], syncMsg); " + + "return {1, value, syncs};" + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getCreatedChannelName(), getUpdatedChannelName(), + getRemovedSyncChannelName(), getCreatedSyncChannelName(), getUpdatedSyncChannelName()), + 0, updateTimeout, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(value), syncId)); + + result.add(syncId); + waitSync(result); + return result; + } + + Long creationTimeout = getCreationTimeout(); + List result = (List) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LIST, + "if ARGV[1] == '0' then " + + "return {nil};" + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[4], syncMsg); " + + "return {1, syncs};" + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[4], syncMsg); " + + "return {1, syncs};" + + "end; ", + 
Arrays.asList(getName(), getTimeoutSetName(), getCreatedChannelName(), getCreatedSyncChannelName()), + creationTimeout, 0, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(value), syncId)); + + result.add(syncId); + waitSync(result); + return result; + } + + private List getAndPutValue(K key, V value) { + Long creationTimeout = getCreationTimeout(); + + Long updateTimeout = getUpdateTimeout(); + + double syncId = ThreadLocalRandom.current().nextDouble(); + + List result = (List) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LIST, + "local value = redis.call('hget', KEYS[1], ARGV[4]);" + + "if value ~= false then " + + "if ARGV[2] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[4]); " + + "redis.call('zrem', KEYS[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value), ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[6], syncMsg); " + + "return {0, value, syncs};" + + "elseif ARGV[2] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "redis.call('zadd', KEYS[2], ARGV[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[5], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[8], syncMsg); " + + "return {1, value, syncs};" + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[5], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[8], syncMsg); " + + "return {1, value, syncs};" + + "end; " + + "else " + + "if ARGV[1] == '0' then " + + "return {nil};" + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[7], syncMsg); " + + "return {1, syncs};" + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[5]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5]); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[5]), ARGV[5], ARGV[6]); " + + "local syncs = redis.call('publish', KEYS[7], syncMsg); " + + "return {1, syncs};" + + "end; " + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getCreatedChannelName(), getUpdatedChannelName(), + getRemovedSyncChannelName(), getCreatedSyncChannelName(), getUpdatedSyncChannelName()), + creationTimeout, updateTimeout, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(value), syncId)); + + if (!result.isEmpty()) { + result.add(syncId); + } + + return result; + } + + @Override + public V getAndPut(K key, V value) { 
+ checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + if (value == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + List result = getAndPutValueLocked(key, value); + if (result.isEmpty()) { + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addMisses(1); + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return null; + } + Long added = (Long) result.get(0); + if (added == null) { + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return (V) result.get(1); + } + + if (Long.valueOf(1).equals(added)) { + try { + cacheWriter.write(new JCacheEntry(key, value)); + } catch (CacheWriterException e) { + removeValues(key); + throw e; + } catch (Exception e) { + removeValues(key); + throw new CacheWriterException(e); + } + } else { + try { + cacheWriter.delete(key); + } catch (CacheWriterException e) { + if (result.size() == 4 && result.get(1) != null) { + putValue(key, result.get(1)); + } + throw e; + } catch (Exception e) { + if (result.size() == 4 && result.get(1) != null) { + putValue(key, result.get(1)); + } + throw new CacheWriterException(e); + } + } + return getAndPutResult(startTime, result); + } finally { + lock.unlock(); + } + } else { + RLock lock = getLockedLock(key); + try { + List result = getAndPutValueLocked(key, value); + return getAndPutResult(startTime, result); + } finally { + lock.unlock(); + } + } + } + + private V getAndPutResult(long startTime, List result) { + if (result.size() != 4) { + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addMisses(1); + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return null; + } + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return (V) result.get(1); + } + + @Override + public void putAll(Map map) { + checkNotClosed(); + Map deletedKeys = new HashMap(); + Map> addedEntries = new HashMap>(); + + for (Map.Entry entry : map.entrySet()) { + K key = entry.getKey(); + if (key == null) { + throw new NullPointerException(); + } + V value = entry.getValue(); + if (value == null) { + throw new NullPointerException(); + } + } + + for (Map.Entry entry : map.entrySet()) { + K key = entry.getKey(); + V value = entry.getValue(); + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + + List result = getAndPutValue(key, value); + if (result.isEmpty()) { + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + continue; + } + Long added = (Long) result.get(0); + if (added == null) { + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + continue; + } + + if (Long.valueOf(1).equals(added)) { + addedEntries.put(key, new 
JCacheEntry(key, value)); + } else { + V val = null; + if (result.size() == 4) { + val = (V) result.get(1); + } + + deletedKeys.put(key, val); + } + cacheManager.getStatBean(this).addPuts(1); + waitSync(result); + } else { + boolean result = putValue(key, value); + if (result) { + cacheManager.getStatBean(this).addPuts(1); + } + } + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + } + + if (config.isWriteThrough()) { + try { + try { + cacheWriter.writeAll(addedEntries.values()); + } catch (CacheWriterException e) { + removeValues(addedEntries.keySet().toArray()); + throw e; + } catch (Exception e) { + removeValues(addedEntries.keySet().toArray()); + throw new CacheWriterException(e); + } + + try { + cacheWriter.deleteAll(deletedKeys.keySet()); + } catch (CacheWriterException e) { + for (Map.Entry deletedEntry : deletedKeys.entrySet()) { + if (deletedEntry.getValue() != null) { + putValue(deletedEntry.getKey(), deletedEntry.getValue()); + } + } + throw e; + } catch (Exception e) { + for (Map.Entry deletedEntry : deletedKeys.entrySet()) { + if (deletedEntry.getValue() != null) { + putValue(deletedEntry.getKey(), deletedEntry.getValue()); + } + } + throw new CacheWriterException(e); + } + } finally { + for (Map.Entry entry : map.entrySet()) { + getLock(entry.getKey()).unlock(); + } + } + } + } + + void waitSync(List result) { + if (result.size() < 2) { + return; + } + + Long syncs = (Long) result.get(result.size() - 2); + Double syncId = (Double) result.get(result.size() - 1); + if (syncs != null && syncs > 0) { + RSemaphore semaphore = redisson.getSemaphore(getSyncName(syncId)); + try { + semaphore.acquire(syncs.intValue()); + semaphore.delete(); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + } + } + } + + @Override + public boolean putIfAbsent(K key, V value) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + if (value == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + boolean result = putIfAbsentValueLocked(key, value); + if (result) { + cacheManager.getStatBean(this).addPuts(1); + try { + cacheWriter.write(new JCacheEntry(key, value)); + } catch (CacheWriterException e) { + removeValues(key); + throw e; + } catch (Exception e) { + removeValues(key); + throw new CacheWriterException(e); + } + } + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return result; + } finally { + lock.unlock(); + } + } else { + RLock lock = getLockedLock(key); + try { + boolean result = putIfAbsentValueLocked(key, value); + if (result) { + cacheManager.getStatBean(this).addPuts(1); + } + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return result; + } finally { + lock.unlock(); + } + } + } + + private boolean removeValue(K key) { + double syncId = ThreadLocalRandom.current().nextDouble(); + + List res = (List) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LIST, + "local value = redis.call('hexists', KEYS[1], ARGV[2]); " + + "if value == 0 then " + + "return {0}; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[2]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[1]) then " + + "return {0}; " + + "end; " + + + "value = 
redis.call('hget', KEYS[1], ARGV[2]); " + + "redis.call('hdel', KEYS[1], ARGV[2]); " + + "redis.call('zrem', KEYS[2], ARGV[2]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[2]), ARGV[2], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[2]), ARGV[2], string.len(tostring(value)), tostring(value), ARGV[3]); " + + "local syncs = redis.call('publish', KEYS[4], syncMsg); " + + "return {1, syncs};", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getRemovedSyncChannelName()), + System.currentTimeMillis(), encodeMapKey(key), syncId)); + + res.add(syncId); + waitSync(res); + + return (Long) res.get(0) == 1; + } + + + @Override + public boolean remove(K key) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + + long startTime = System.currentTimeMillis(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + V oldValue = getValue(key); + boolean result = removeValue(key); + try { + cacheWriter.delete(key); + } catch (CacheWriterException e) { + if (oldValue != null) { + putValue(key, oldValue); + } + throw e; + } catch (Exception e) { + if (oldValue != null) { + putValue(key, oldValue); + } + throw new CacheWriterException(e); + } + if (result) { + cacheManager.getStatBean(this).addRemovals(1); + } + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + return result; + } finally { + lock.unlock(); + } + } else { + boolean result = removeValue(key); + if (result) { + cacheManager.getStatBean(this).addRemovals(1); + } + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + return result; + } + + } + + private boolean removeValueLocked(K key, V value) { + + Boolean result = (Boolean) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_REMOVE_KEY_VALUE, + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "if value == false then " + + "return 0; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[3]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[2]) then " + + "return 0; " + + "end; " + + + "if ARGV[4] == value then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "return 1; " + + "end; " + + "return nil;", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName()), + 0, System.currentTimeMillis(), key, value)); + + if (result == null) { + + Long accessTimeout = getAccessTimeout(); + return (Boolean) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_REMOVE_KEY_VALUE, + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "end; " + + "return 0; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName()), + accessTimeout, 
System.currentTimeMillis(), key, value)); + } + + return result; + } + + private boolean removeValue(K key, V value) { + Long accessTimeout = getAccessTimeout(); + + return (Boolean) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_REMOVE_KEY_VALUE, + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "if value == false then " + + "return 0; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[3]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[2]) then " + + "return 0; " + + "end; " + + + "if ARGV[4] == value then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "return 1; " + + "end; " + + + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "end; " + + "return 0; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName()), + accessTimeout, System.currentTimeMillis(), key, value)); + } + + + @Override + public boolean remove(K key, V value) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + if (value == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + boolean result; + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + result = removeValueLocked(key, value); + if (result) { + try { + cacheWriter.delete(key); + } catch (CacheWriterException e) { + putValue(key, value); + throw e; + } catch (Exception e) { + putValue(key, value); + throw new CacheWriterException(e); + } + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addRemovals(1); + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + return true; + } else { + cacheManager.getStatBean(this).addMisses(1); + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + return false; + } + } finally { + lock.unlock(); + } + } else { + RLock lock = getLockedLock(key); + try { + result = removeValueLocked(key, value); + if (result) { + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addRemovals(1); + } else { + cacheManager.getStatBean(this).addMisses(1); + } + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + return result; + } finally { + lock.unlock(); + } + } + } + + private V getAndRemoveValue(K key) { + double syncId = ThreadLocalRandom.current().nextDouble(); + List result = (List) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_GET_REMOVE_VALUE_LIST, + "local value = redis.call('hget', KEYS[1], ARGV[2]); " + + "if value == false then " + + "return {nil}; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[2]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[1]) then " + + "return {nil}; " + + "end; 
" + + + "redis.call('hdel', KEYS[1], ARGV[2]); " + + "redis.call('zrem', KEYS[2], ARGV[2]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[2]), ARGV[2], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[2]), ARGV[2], string.len(tostring(value)), tostring(value), ARGV[3]); " + + "local syncs = redis.call('publish', KEYS[4], syncMsg); " + + "return {value, syncs}; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getRemovedSyncChannelName()), + System.currentTimeMillis(), encodeMapKey(key), syncId)); + + if (result.isEmpty()) { + return null; + } + + result.add(syncId); + waitSync(result); + + return (V) result.get(0); + } + + + @Override + public V getAndRemove(K key) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + Object value = getAndRemoveValue(key); + if (value != null) { + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addRemovals(1); + } else { + cacheManager.getStatBean(this).addMisses(1); + } + + try { + cacheWriter.delete(key); + } catch (CacheWriterException e) { + if (value != null) { + putValue(key, value); + } + throw e; + } catch (Exception e) { + if (value != null) { + putValue(key, value); + } + throw new CacheWriterException(e); + } + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + return (V) value; + } finally { + lock.unlock(); + } + } else { + V value = getAndRemoveValue(key); + if (value != null) { + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addRemovals(1); + } else { + cacheManager.getStatBean(this).addMisses(1); + } + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + return value; + } + } + + private long replaceValueLocked(K key, V oldValue, V newValue) { + Long res = (Long) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_REPLACE_OLD_NEW_VALUE, + "local value = redis.call('hget', KEYS[1], ARGV[4]); " + + "if value == false then " + + "return 0; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[4]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[3]) then " + + "return 0; " + + "end; " + + + "if ARGV[5] == value then " + + "return 1;" + + "end; " + + "return -1;", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getUpdatedChannelName()), + 0, 0, System.currentTimeMillis(), key, oldValue, newValue)); + + if (res == 1) { + Long updateTimeout = getUpdateTimeout(); + double syncId = ThreadLocalRandom.current().nextDouble(); + Long syncs = (Long) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LONG, + "if ARGV[2] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[4]); " + + "redis.call('zrem', KEYS[2], ARGV[4]); " + + "local value = redis.call('hget', KEYS[1], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = 
struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value), ARGV[7]); " + + "return redis.call('publish', KEYS[5], syncMsg); " + + "elseif ARGV[2] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[6]); " + + "redis.call('zadd', KEYS[2], ARGV[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[6]), ARGV[6]); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[6]), ARGV[6], ARGV[7]); " + + "return redis.call('publish', KEYS[6], syncMsg); " + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[6]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[6]), ARGV[6]); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(ARGV[6]), ARGV[6], ARGV[7]); " + + "return redis.call('publish', KEYS[6], syncMsg); " + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getUpdatedChannelName(), + getRemovedSyncChannelName(), getUpdatedSyncChannelName()), + 0, updateTimeout, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(oldValue), encodeMapValue(newValue), syncId)); + + List result = Arrays.asList(syncs, syncId); + waitSync(result); + + return res; + } else if (res == 0) { + return res; + } + + Long accessTimeout = getAccessTimeout(); + + double syncId = ThreadLocalRandom.current().nextDouble(); + List result = (List) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LIST, + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[4]); " + + "redis.call('zrem', KEYS[2], ARGV[4]); " + + "local value = redis.call('hget', KEYS[1], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(value), value); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[4]), ARGV[4], string.len(value), value, ARGV[7]); " + + "local syncs = redis.call('publish', KEYS[4], syncMsg); " + + "return {-1, syncs}; " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "return {0};" + + "end; " + + "return {-1}; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getRemovedSyncChannelName()), + accessTimeout, 0, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(oldValue), encodeMapValue(newValue), syncId)); + + result.add(syncId); + waitSync(result); + return (Long) result.get(0); + } + + + private long replaceValue(K key, V oldValue, V newValue) { + Long accessTimeout = getAccessTimeout(); + + Long updateTimeout = getUpdateTimeout(); + + return (Long) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_REPLACE_OLD_NEW_VALUE, + "local value = redis.call('hget', KEYS[1], ARGV[4]); " + + "if value == false then " + + "return 0; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[4]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[3]) then " + + "return 0; " + + "end; " + + + "if ARGV[5] == value then " + + "if ARGV[2] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[4]); " + + "redis.call('zrem', KEYS[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(tostring(value)), tostring(value)); " + + 
"redis.call('publish', KEYS[3], msg); " + + "elseif ARGV[2] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[6]); " + + "redis.call('zadd', KEYS[2], ARGV[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[6]), ARGV[6]); " + + "redis.call('publish', KEYS[4], msg); " + + "else " + + "redis.call('hset', KEYS[1], ARGV[4], ARGV[6]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(ARGV[6]), ARGV[6]); " + + "redis.call('publish', KEYS[4], msg); " + + "end; " + + "return 1;" + + "end; " + + + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[4]); " + + "redis.call('zrem', KEYS[2], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[4]), ARGV[4], string.len(value), value); " + + "redis.call('publish', KEYS[3], msg); " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "return 0;" + + "end; " + + "return -1; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getUpdatedChannelName()), + accessTimeout, updateTimeout, System.currentTimeMillis(), key, oldValue, newValue)); + + } + + @Override + public boolean replace(K key, V oldValue, V newValue) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + if (oldValue == null) { + throw new NullPointerException(); + } + if (newValue == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + long result = replaceValueLocked(key, oldValue, newValue); + if (result == 1) { + try { + cacheWriter.write(new JCacheEntry(key, newValue)); + } catch (CacheWriterException e) { + removeValues(key); + throw e; + } catch (Exception e) { + removeValues(key); + throw new CacheWriterException(e); + } + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addPuts(1); + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return true; + } else { + if (result == 0) { + cacheManager.getStatBean(this).addMisses(1); + } else { + cacheManager.getStatBean(this).addHits(1); + } + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return false; + } + } finally { + lock.unlock(); + } + } else { + RLock lock = getLockedLock(key); + try { + long result = replaceValueLocked(key, oldValue, newValue); + if (result == 1) { + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addPuts(1); + } else if (result == 0){ + cacheManager.getStatBean(this).addMisses(1); + } else { + cacheManager.getStatBean(this).addHits(1); + } + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return result == 1; + } finally { + lock.unlock(); + } + } + } + + private boolean replaceValueLocked(K key, V value) { + + if (containsKey(key)) { + double syncId = ThreadLocalRandom.current().nextDouble(); + Long updateTimeout = getUpdateTimeout(); + Long syncs = (Long) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LONG, + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + 
"local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value), ARGV[5]); " + + "return redis.call('publish', KEYS[5], syncMsg); " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[3], ARGV[4]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4]); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4], ARGV[5]); " + + "return redis.call('publish', KEYS[6], syncMsg); " + + "else " + + "redis.call('hset', KEYS[1], ARGV[3], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4]); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4], ARGV[5]); " + + "return redis.call('publish', KEYS[6], syncMsg); " + + "end; ", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getUpdatedChannelName(), + getRemovedSyncChannelName(), getUpdatedSyncChannelName()), + updateTimeout, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(value), syncId)); + + List result = Arrays.asList(syncs, syncId); + waitSync(result); + return true; + } + + return false; + + } + + + private boolean replaceValue(K key, V value) { + Long updateTimeout = getUpdateTimeout(); + + return (Boolean) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_REPLACE_VALUE, + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "if value == false then " + + "return 0; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[3]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[2]) then " + + "return 0; " + + "end; " + + + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[3], ARGV[4]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4]); " + + "redis.call('publish', KEYS[4], msg); " + + "else " + + "redis.call('hset', KEYS[1], ARGV[3], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4]); " + + "redis.call('publish', KEYS[4], msg); " + + "end; " + + "return 1;", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getUpdatedChannelName()), + updateTimeout, System.currentTimeMillis(), key, value)); + + } + + private V getAndReplaceValue(K key, V value) { + Long updateTimeout = getUpdateTimeout(); + + return (V) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_GET_REPLACE, + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "if value == false then " + + "return nil; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], 
ARGV[3]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[2]) then " + + "return nil; " + + "end; " + + + "if ARGV[1] == '0' then " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[3], ARGV[4]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4]); " + + "redis.call('publish', KEYS[4], msg); " + + "else " + + "redis.call('hset', KEYS[1], ARGV[3], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4]); " + + "redis.call('publish', KEYS[4], msg); " + + "end; " + + "return value;", + Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getUpdatedChannelName()), + updateTimeout, System.currentTimeMillis(), key, value)); + + } + + private V getAndReplaceValueLocked(K key, V value) { + V oldValue = (V) get(commandExecutor.evalWriteAsync(getName(), codec, EVAL_GET_REPLACE, + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "if value == false then " + + "return nil; " + + "end; " + + + "local expireDate = 92233720368547758; " + + "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[3]); " + + "if expireDateScore ~= false then " + + "expireDate = tonumber(expireDateScore); " + + "end; " + + + "if expireDate <= tonumber(ARGV[2]) then " + + "return nil; " + + "end; " + + + "return value;", Arrays.asList(getName(), getTimeoutSetName(), getRemovedChannelName(), getUpdatedChannelName()), + 0, System.currentTimeMillis(), key, value)); + + if (oldValue != null) { + Long updateTimeout = getUpdateTimeout(); + double syncId = ThreadLocalRandom.current().nextDouble(); + Long syncs = (Long) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LONG, + "if ARGV[1] == '0' then " + + "local value = redis.call('hget', KEYS[1], ARGV[3]); " + + "redis.call('hdel', KEYS[1], ARGV[3]); " + + "redis.call('zrem', KEYS[2], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value)); " + + "redis.call('publish', KEYS[3], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[3]), ARGV[3], string.len(tostring(value)), tostring(value), ARGV[5]); " + + "return redis.call('publish', KEYS[5], msg); " + + "elseif ARGV[1] ~= '-1' then " + + "redis.call('hset', KEYS[1], ARGV[3], ARGV[4]); " + + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[3]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4]); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4], ARGV[5]); " + + "return redis.call('publish', KEYS[6], syncMsg); " + + "else " + + "redis.call('hset', KEYS[1], ARGV[3], ARGV[4]); " + + "local msg = struct.pack('Lc0Lc0', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4]); " + + "redis.call('publish', KEYS[4], msg); " + + "local syncMsg = struct.pack('Lc0Lc0d', string.len(ARGV[3]), ARGV[3], string.len(ARGV[4]), ARGV[4], ARGV[5]); " + + "return redis.call('publish', KEYS[6], syncMsg); " + + "end; ", + Arrays.asList(getName(), 
getTimeoutSetName(), getRemovedChannelName(), getUpdatedChannelName(), + getRemovedSyncChannelName(), getUpdatedSyncChannelName()), + updateTimeout, System.currentTimeMillis(), encodeMapKey(key), encodeMapValue(value), syncId)); + + List result = Arrays.asList(syncs, syncId); + waitSync(result); + } + return oldValue; + } + + + @Override + public boolean replace(K key, V value) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + if (value == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + boolean result = replaceValueLocked(key, value); + if (result) { + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addPuts(1); + try { + cacheWriter.write(new JCacheEntry(key, value)); + } catch (CacheWriterException e) { + removeValues(key); + throw e; + } catch (Exception e) { + removeValues(key); + throw new CacheWriterException(e); + } + } else { + cacheManager.getStatBean(this).addMisses(1); + } + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return result; + } finally { + lock.unlock(); + } + } else { + RLock lock = getLockedLock(key); + try { + boolean result = replaceValueLocked(key, value); + if (result) { + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addPuts(1); + } else { + cacheManager.getStatBean(this).addMisses(1); + } + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + return result; + } finally { + lock.unlock(); + } + } + } + + @Override + public V getAndReplace(K key, V value) { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + if (value == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + try { + V result = getAndReplaceValueLocked(key, value); + if (result != null) { + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addPuts(1); + try { + cacheWriter.write(new JCacheEntry(key, value)); + } catch (CacheWriterException e) { + removeValues(key); + throw e; + } catch (Exception e) { + removeValues(key); + throw new CacheWriterException(e); + } + } else { + cacheManager.getStatBean(this).addMisses(1); + } + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + return result; + } finally { + lock.unlock(); + } + } else { + RLock lock = getLockedLock(key); + try { + V result = getAndReplaceValueLocked(key, value); + if (result != null) { + cacheManager.getStatBean(this).addHits(1); + cacheManager.getStatBean(this).addPuts(1); + } else { + cacheManager.getStatBean(this).addMisses(1); + } + cacheManager.getStatBean(this).addPutTime(currentNanoTime() - startTime); + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + return result; + } finally { + lock.unlock(); + } + } + } + + @Override + public void removeAll(Set keys) { + checkNotClosed(); + Map deletedKeys = new HashMap(); + + for (K key : keys) { + if (key == null) { + throw new NullPointerException(); + } + } + + long startTime = currentNanoTime(); + if (config.isWriteThrough()) { + for (K key : keys) { + RLock lock = getLock(key); + lock.lock(30, TimeUnit.MINUTES); + V result = getAndRemoveValue(key); + if (result != null) { + 
deletedKeys.put(key, result); + } + } + + try { + try { + cacheWriter.deleteAll(deletedKeys.keySet()); + } catch (CacheWriterException e) { + for (Map.Entry deletedEntry : deletedKeys.entrySet()) { + if (deletedEntry.getValue() != null) { + putValue(deletedEntry.getKey(), deletedEntry.getValue()); + } + } + throw e; + } catch (Exception e) { + for (Map.Entry deletedEntry : deletedKeys.entrySet()) { + if (deletedEntry.getValue() != null) { + putValue(deletedEntry.getKey(), deletedEntry.getValue()); + } + } + throw new CacheWriterException(e); + } + cacheManager.getStatBean(this).addRemovals(deletedKeys.size()); + } finally { + for (K key : keys) { + getLock(key).unlock(); + } + } + } else { + long removedKeys = removeValues(keys.toArray()); + cacheManager.getStatBean(this).addRemovals(removedKeys); + } + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + } + + MapScanResult scanIterator(String name, InetSocketAddress client, long startPos) { + RFuture> f + = commandExecutor.readAsync(client, name, new MapScanCodec(codec), RedisCommands.HSCAN, name, startPos); + return get(f); + } + + protected Iterator keyIterator() { + return new RedissonBaseMapIterator() { + @Override + protected K getValue(Map.Entry entry) { + return (K) entry.getKey().getObj(); + } + + @Override + protected MapScanResult iterator() { + return JCache.this.scanIterator(JCache.this.getName(), client, nextIterPos); + } + + @Override + protected void removeKey() { + throw new UnsupportedOperationException(); + } + + @Override + protected V put(Map.Entry entry, V value) { + throw new UnsupportedOperationException(); + } + }; + } + + @Override + public void removeAll() { + checkNotClosed(); + if (config.isWriteThrough()) { + for (Iterator iterator = keyIterator(); iterator.hasNext();) { + K key = iterator.next(); + remove(key); + } + } else { + long startTime = currentNanoTime(); + long removedObjects = (Long) get(commandExecutor.evalWriteAsync(getName(), codec, RedisCommands.EVAL_LONG, + "local expiredEntriesCount = redis.call('zcount', KEYS[2], 0, ARGV[1]); " + + "local result = 0; " + + "if expiredEntriesCount > 0 then " + + "result = redis.call('zcard', KEYS[2]) - expiredEntriesCount; " + + "else " + + "result = redis.call('hlen', KEYS[1]); " + + "end; " + + "redis.call('del', KEYS[1], KEYS[2]); " + + "return result; ", + Arrays.asList(getName(), getTimeoutSetName()), + System.currentTimeMillis())); + cacheManager.getStatBean(this).addRemovals(removedObjects); + cacheManager.getStatBean(this).addRemoveTime(currentNanoTime() - startTime); + } + } + + @Override + public void clear() { + checkNotClosed(); + get(commandExecutor.writeAsync(getName(), RedisCommands.DEL_OBJECTS, getName(), getTimeoutSetName())); + } + + @Override + public > C getConfiguration(Class clazz) { + if (clazz.isInstance(config)) { + return clazz.cast(config); + } + + throw new IllegalArgumentException("Configuration object is not an instance of " + clazz); + } + + @Override + public T invoke(K key, EntryProcessor entryProcessor, Object... 
arguments) + throws EntryProcessorException { + checkNotClosed(); + if (key == null) { + throw new NullPointerException(); + } + if (entryProcessor == null) { + throw new NullPointerException(); + } + + long startTime = currentNanoTime(); + if (containsKey(key)) { + cacheManager.getStatBean(this).addHits(1); + } else { + cacheManager.getStatBean(this).addMisses(1); + } + cacheManager.getStatBean(this).addGetTime(currentNanoTime() - startTime); + + JMutableEntry entry = new JMutableEntry(this, key, null, config.isReadThrough()); + + try { + T result = entryProcessor.process(entry, arguments); + if (entry.getAction() == Action.CREATED + || entry.getAction() == Action.UPDATED) { + put(key, entry.value()); + } + if (entry.getAction() == Action.DELETED) { + remove(key); + } + return result; + } catch (EntryProcessorException e) { + throw e; + } catch (Exception e) { + throw new EntryProcessorException(e); + } + } + + @Override + public Map> invokeAll(Set keys, EntryProcessor entryProcessor, + Object... arguments) { + checkNotClosed(); + if (entryProcessor == null) { + throw new NullPointerException(); + } + + Map> results = new HashMap>(); + for (K key : keys) { + try { + final T result = invoke(key, entryProcessor, arguments); + if (result != null) { + results.put(key, new EntryProcessorResult() { + @Override + public T get() throws EntryProcessorException { + return result; + } + }); + } + } catch (final EntryProcessorException e) { + results.put(key, new EntryProcessorResult() { + @Override + public T get() throws EntryProcessorException { + throw e; + } + }); + } + } + + return results; + } + + @Override + public CacheManager getCacheManager() { + checkNotClosed(); + return cacheManager; + } + + @Override + public void close() { + if (isClosed()) { + return; + } + + synchronized (cacheManager) { + if (!isClosed()) { + if (hasOwnRedisson) { + redisson.shutdown(); + } + cacheManager.closeCache(this); + for (CacheEntryListenerConfiguration config : listeners.keySet()) { + deregisterCacheEntryListener(config); + } + + closed = true; + } + } + } + + @Override + public boolean isClosed() { + return closed; + } + + @Override + public T unwrap(Class clazz) { + if (clazz.isAssignableFrom(getClass())) { + return clazz.cast(this); + } + + return null; + } + + @Override + public void registerCacheEntryListener(CacheEntryListenerConfiguration cacheEntryListenerConfiguration) { + registerCacheEntryListener(cacheEntryListenerConfiguration, true); + } + + private void registerCacheEntryListener(CacheEntryListenerConfiguration cacheEntryListenerConfiguration, boolean addToConfig) { + Factory> factory = cacheEntryListenerConfiguration.getCacheEntryListenerFactory(); + final CacheEntryListener listener = factory.create(); + + Factory> filterFactory = cacheEntryListenerConfiguration.getCacheEntryEventFilterFactory(); + final CacheEntryEventFilter filter; + if (filterFactory != null) { + filter = filterFactory.create(); + } else { + filter = null; + } + + Map values = new ConcurrentHashMap(); + + Map oldValues = listeners.putIfAbsent(cacheEntryListenerConfiguration, values); + if (oldValues != null) { + values = oldValues; + } + + final boolean sync = cacheEntryListenerConfiguration.isSynchronous(); + + if (CacheEntryRemovedListener.class.isAssignableFrom(listener.getClass())) { + String channelName = getRemovedChannelName(); + if (sync) { + channelName = getRemovedSyncChannelName(); + } + + RTopic> topic = redisson.getTopic(channelName, new JCacheEventCodec(codec, sync)); + int listenerId = 
topic.addListener(new MessageListener>() { + @Override + public void onMessage(String channel, List msg) { + JCacheEntryEvent event = new JCacheEntryEvent(JCache.this, EventType.REMOVED, msg.get(0), msg.get(1)); + try { + if (filter == null || filter.evaluate(event)) { + List> events = Collections.>singletonList(event); + ((CacheEntryRemovedListener) listener).onRemoved(events); + } + } finally { + sendSync(sync, msg); + } + } + }); + values.put(listenerId, channelName); + } + if (CacheEntryCreatedListener.class.isAssignableFrom(listener.getClass())) { + String channelName = getCreatedChannelName(); + if (sync) { + channelName = getCreatedSyncChannelName(); + } + + RTopic> topic = redisson.getTopic(channelName, new JCacheEventCodec(codec, sync)); + int listenerId = topic.addListener(new MessageListener>() { + @Override + public void onMessage(String channel, List msg) { + JCacheEntryEvent event = new JCacheEntryEvent(JCache.this, EventType.CREATED, msg.get(0), msg.get(1)); + try { + if (filter == null || filter.evaluate(event)) { + List> events = Collections.>singletonList(event); + ((CacheEntryCreatedListener) listener).onCreated(events); + } + } finally { + sendSync(sync, msg); + } + } + }); + values.put(listenerId, channelName); + } + if (CacheEntryUpdatedListener.class.isAssignableFrom(listener.getClass())) { + String channelName = getUpdatedChannelName(); + if (sync) { + channelName = getUpdatedSyncChannelName(); + } + + RTopic> topic = redisson.getTopic(channelName, new JCacheEventCodec(codec, sync)); + int listenerId = topic.addListener(new MessageListener>() { + @Override + public void onMessage(String channel, List msg) { + JCacheEntryEvent event = new JCacheEntryEvent(JCache.this, EventType.UPDATED, msg.get(0), msg.get(1)); + try { + if (filter == null || filter.evaluate(event)) { + List> events = Collections.>singletonList(event); + ((CacheEntryUpdatedListener) listener).onUpdated(events); + } + } finally { + sendSync(sync, msg); + } + } + }); + values.put(listenerId, channelName); + } + if (CacheEntryExpiredListener.class.isAssignableFrom(listener.getClass())) { + String channelName = getExpiredChannelName(); + + RTopic> topic = redisson.getTopic(channelName, new JCacheEventCodec(codec, false)); + int listenerId = topic.addListener(new MessageListener>() { + @Override + public void onMessage(String channel, List msg) { + JCacheEntryEvent event = new JCacheEntryEvent(JCache.this, EventType.EXPIRED, msg.get(0), msg.get(1)); + if (filter == null || filter.evaluate(event)) { + List> events = Collections.>singletonList(event); + ((CacheEntryExpiredListener) listener).onExpired(events); + } + } + }); + values.put(listenerId, channelName); + } + + if (addToConfig) { + config.addCacheEntryListenerConfiguration(cacheEntryListenerConfiguration); + } + } + + private void sendSync(boolean sync, List msg) { + if (sync) { + RSemaphore semaphore = redisson.getSemaphore(getSyncName(msg.get(2))); + semaphore.release(); + } + } + + @Override + public void deregisterCacheEntryListener(CacheEntryListenerConfiguration cacheEntryListenerConfiguration) { + Map listenerIds = listeners.remove(cacheEntryListenerConfiguration); + if (listenerIds != null) { + for (Map.Entry entry : listenerIds.entrySet()) { + redisson.getTopic(entry.getValue()).removeListener(entry.getKey()); + } + } + config.removeCacheEntryListenerConfiguration(cacheEntryListenerConfiguration); + } + + @Override + public Iterator> iterator() { + checkNotClosed(); + return new RedissonBaseMapIterator>() { + @Override + protected 
Cache.Entry getValue(Map.Entry entry) { + cacheManager.getStatBean(JCache.this).addHits(1); + Long accessTimeout = getAccessTimeout(); + JCacheEntry je = new JCacheEntry((K) entry.getKey().getObj(), (V) entry.getValue().getObj()); + if (accessTimeout == 0) { + remove(); + } else if (accessTimeout != -1) { + get(commandExecutor.writeAsync(getName(), RedisCommands.ZADD_BOOL, getTimeoutSetName(), accessTimeout, entry.getKey().getObj())); + } + return je; + } + + @Override + protected MapScanResult iterator() { + return JCache.this.scanIterator(JCache.this.getName(), client, nextIterPos); + } + + @Override + protected void removeKey() { + JCache.this.remove((K) entry.getKey().getObj()); + } + + @Override + protected V put(Map.Entry entry, V value) { + throw new UnsupportedOperationException(); + } + }; + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/JCacheEntry.java b/redisson/src/main/java/org/redisson/jcache/JCacheEntry.java new file mode 100644 index 000000000..f2bdafbb4 --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/JCacheEntry.java @@ -0,0 +1,57 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache; + +import javax.cache.Cache; + +/** + * + * @author Nikita Koksharov + * + * @param key + * @param value + */ +public class JCacheEntry implements Cache.Entry { + + private final K key; + private final V value; + + public JCacheEntry(K key, V value) { + super(); + this.key = key; + this.value = value; + } + + @Override + public K getKey() { + return key; + } + + @Override + public V getValue() { + return value; + } + + @Override + public T unwrap(Class clazz) { + if (clazz.isAssignableFrom(getClass())) { + return clazz.cast(this); + } + + return null; + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/JCacheEntryEvent.java b/redisson/src/main/java/org/redisson/jcache/JCacheEntryEvent.java new file mode 100644 index 000000000..933c873fe --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/JCacheEntryEvent.java @@ -0,0 +1,74 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.jcache; + +import javax.cache.Cache; +import javax.cache.event.CacheEntryEvent; +import javax.cache.event.EventType; + +/** + * Entry event element passed to EventListener of JCache object + * + * @author Nikita Koksharov + * + * @param key + * @param value + */ +public class JCacheEntryEvent extends CacheEntryEvent { + + private static final long serialVersionUID = -4601376694286796662L; + + private final Object key; + private final Object value; + + public JCacheEntryEvent(Cache source, EventType eventType, Object key, Object value) { + super(source, eventType); + this.key = key; + this.value = value; + } + + @Override + public K getKey() { + return (K) key; + } + + @Override + public V getValue() { + return (V) value; + } + + @Override + public T unwrap(Class clazz) { + if (clazz.isAssignableFrom(getClass())) { + return clazz.cast(this); + } + + return null; + } + + @Override + public V getOldValue() { + // TODO Auto-generated method stub + return null; + } + + @Override + public boolean isOldValueAvailable() { + // TODO Auto-generated method stub + return false; + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/JCacheEventCodec.java b/redisson/src/main/java/org/redisson/jcache/JCacheEventCodec.java new file mode 100644 index 000000000..945cd25a8 --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/JCacheEventCodec.java @@ -0,0 +1,110 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.jcache; + +import java.io.IOException; +import java.nio.ByteOrder; +import java.util.ArrayList; +import java.util.List; + +import org.redisson.client.codec.Codec; +import org.redisson.client.handler.State; +import org.redisson.client.protocol.Decoder; +import org.redisson.client.protocol.Encoder; + +import io.netty.buffer.ByteBuf; +import io.netty.util.internal.PlatformDependent; + +/** + * + * @author Nikita Koksharov + * + */ +public class JCacheEventCodec implements Codec { + + private final Codec codec; + private final boolean sync; + + private final Decoder decoder = new Decoder() { + @Override + public Object decode(ByteBuf buf, State state) throws IOException { + List result = new ArrayList(); + int keyLen; + if (PlatformDependent.isWindows()) { + keyLen = buf.readIntLE(); + } else { + keyLen = (int) buf.readLongLE(); + } + ByteBuf keyBuf = buf.readSlice(keyLen); + Object key = codec.getMapKeyDecoder().decode(keyBuf, state); + result.add(key); + + int valueLen; + if (PlatformDependent.isWindows()) { + valueLen = buf.readIntLE(); + } else { + valueLen = (int) buf.readLongLE(); + } + ByteBuf valueBuf = buf.readSlice(valueLen); + Object value = codec.getMapValueDecoder().decode(valueBuf, state); + result.add(value); + + if (sync) { + double syncId = buf.order(ByteOrder.LITTLE_ENDIAN).readDouble(); + result.add(syncId); + } + + return result; + } + }; + + public JCacheEventCodec(Codec codec, boolean sync) { + super(); + this.codec = codec; + this.sync = sync; + } + + @Override + public Decoder getMapValueDecoder() { + throw new UnsupportedOperationException(); + } + + @Override + public Encoder getMapValueEncoder() { + throw new UnsupportedOperationException(); + } + + @Override + public Decoder getMapKeyDecoder() { + throw new UnsupportedOperationException(); + } + + @Override + public Encoder getMapKeyEncoder() { + throw new UnsupportedOperationException(); + } + + @Override + public Decoder getValueDecoder() { + return decoder; + } + + @Override + public Encoder getValueEncoder() { + throw new UnsupportedOperationException(); + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/JCacheMBeanServerBuilder.java b/redisson/src/main/java/org/redisson/jcache/JCacheMBeanServerBuilder.java new file mode 100644 index 000000000..2b0d890ae --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/JCacheMBeanServerBuilder.java @@ -0,0 +1,121 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
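
For context, JCacheEventCodec above decodes the key, the value and (for synchronous channels) the trailing syncId that the Lua scripts publish via struct.pack, while registerCacheEntryListener in JCache subscribes an RTopic per event type. The sketch below shows the consumer side through the plain JSR-107 API: a synchronous created-event listener. The listener class, the cache name and the presence of a default Redisson JCache config file on the classpath are assumptions for illustration, not part of this change.

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.FactoryBuilder;
    import javax.cache.configuration.MutableCacheEntryListenerConfiguration;
    import javax.cache.configuration.MutableConfiguration;
    import javax.cache.event.CacheEntryCreatedListener;
    import javax.cache.event.CacheEntryEvent;
    import javax.cache.event.CacheEntryListenerException;

    public class JCacheListenerExample {

        // Listener class used with FactoryBuilder.factoryOf(Class), which instantiates it reflectively.
        public static class CreatedLogger implements CacheEntryCreatedListener<String, String> {
            @Override
            public void onCreated(Iterable<CacheEntryEvent<? extends String, ? extends String>> events)
                    throws CacheEntryListenerException {
                for (CacheEntryEvent<? extends String, ? extends String> event : events) {
                    System.out.println("created: " + event.getKey() + " = " + event.getValue());
                }
            }
        }

        public static void main(String[] args) {
            CacheManager manager = Caching.getCachingProvider().getCacheManager();
            Cache<String, String> cache = manager.createCache("listenerExample",
                    new MutableConfiguration<String, String>());

            // oldValueRequired = false, synchronous = true: synchronous listeners are the ones
            // delivered through the *SyncChannelName topics and acknowledged by the semaphore
            // handshake in sendSync()/waitSync() above.
            cache.registerCacheEntryListener(new MutableCacheEntryListenerConfiguration<String, String>(
                    FactoryBuilder.factoryOf(CreatedLogger.class),
                    null, false, true));

            cache.put("k1", "v1"); // publishes a created event that the listener receives

            manager.close();
        }
    }
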
+ */ +package org.redisson.jcache; + +import javax.management.ListenerNotFoundException; +import javax.management.MBeanNotificationInfo; +import javax.management.MBeanServer; +import javax.management.MBeanServerBuilder; +import javax.management.MBeanServerDelegate; +import javax.management.Notification; +import javax.management.NotificationFilter; +import javax.management.NotificationListener; + + +/** + * + * @author Nikita Koksharov + * + */ +public final class JCacheMBeanServerBuilder extends MBeanServerBuilder { + + @Override + public MBeanServer newMBeanServer(String defaultDomain, MBeanServer outer, + MBeanServerDelegate delegate) { + MBeanServerDelegate wrappedDelegate = new JCacheMBeanServerDelegate(delegate); + MBeanServerBuilder builder = new MBeanServerBuilder(); + return builder.newMBeanServer(defaultDomain, outer, wrappedDelegate); + } + + public final class JCacheMBeanServerDelegate extends MBeanServerDelegate { + + private final MBeanServerDelegate delegate; + + public JCacheMBeanServerDelegate(MBeanServerDelegate delegate) { + this.delegate = delegate; + } + + @Override + public MBeanNotificationInfo[] getNotificationInfo() { + return delegate.getNotificationInfo(); + } + + @Override + public String getSpecificationName() { + return delegate.getSpecificationName(); + } + + @Override + public String getSpecificationVersion() { + return delegate.getSpecificationVersion(); + } + + @Override + public String getSpecificationVendor() { + return delegate.getSpecificationVendor(); + } + + @Override + public String getImplementationName() { + return delegate.getImplementationName(); + } + + @Override + public String getImplementationVersion() { + return delegate.getImplementationVersion(); + } + + @Override + public String getImplementationVendor() { + return delegate.getImplementationVendor(); + } + + @Override + public synchronized void addNotificationListener( + NotificationListener listener, NotificationFilter filter, Object handback) + throws IllegalArgumentException { + delegate.addNotificationListener(listener, filter, handback); + } + + @Override + public synchronized void removeNotificationListener( + NotificationListener listener, + NotificationFilter filter, + Object handback) throws + ListenerNotFoundException { + delegate.removeNotificationListener(listener, filter, handback); + } + + @Override + public synchronized void removeNotificationListener(NotificationListener + listener) throws + ListenerNotFoundException { + delegate.removeNotificationListener(listener); + } + + @Override + public void sendNotification(Notification notification) { + delegate.sendNotification(notification); + } + + @Override + public synchronized String getMBeanServerId() { + return System.getProperty("org.jsr107.tck.management.agentId"); + } + } + + +} diff --git a/redisson/src/main/java/org/redisson/jcache/JCacheManager.java b/redisson/src/main/java/org/redisson/jcache/JCacheManager.java new file mode 100644 index 000000000..a037e1717 --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/JCacheManager.java @@ -0,0 +1,388 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache; + +import java.lang.management.ManagementFactory; +import java.net.URI; +import java.util.Collections; +import java.util.HashSet; +import java.util.Properties; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; + +import javax.cache.Cache; +import javax.cache.CacheException; +import javax.cache.CacheManager; +import javax.cache.configuration.CompleteConfiguration; +import javax.cache.configuration.Configuration; +import javax.cache.spi.CachingProvider; +import javax.management.InstanceAlreadyExistsException; +import javax.management.InstanceNotFoundException; +import javax.management.MBeanRegistrationException; +import javax.management.MBeanServer; +import javax.management.MalformedObjectNameException; +import javax.management.NotCompliantMBeanException; +import javax.management.ObjectName; + +import org.redisson.Redisson; +import org.redisson.api.RedissonClient; +import org.redisson.jcache.bean.EmptyStatisticsMXBean; +import org.redisson.jcache.bean.JCacheManagementMXBean; +import org.redisson.jcache.bean.JCacheStatisticsMXBean; +import org.redisson.jcache.configuration.JCacheConfiguration; +import org.redisson.jcache.configuration.RedissonConfiguration; + +/** + * + * @author Nikita Koksharov + * + */ +public class JCacheManager implements CacheManager { + + private static final EmptyStatisticsMXBean EMPTY_INSTANCE = new EmptyStatisticsMXBean(); + private static MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer(); + + private final ClassLoader classLoader; + private final CachingProvider cacheProvider; + private final Properties properties; + private final URI uri; + private final ConcurrentMap> caches = new ConcurrentHashMap>(); + private final ConcurrentMap, JCacheStatisticsMXBean> statBeans = new ConcurrentHashMap, JCacheStatisticsMXBean>(); + private final ConcurrentMap, JCacheManagementMXBean> managementBeans = new ConcurrentHashMap, JCacheManagementMXBean>(); + + private volatile boolean closed; + + private final Redisson redisson; + + JCacheManager(Redisson redisson, ClassLoader classLoader, CachingProvider cacheProvider, Properties properties, URI uri) { + super(); + this.classLoader = classLoader; + this.cacheProvider = cacheProvider; + this.properties = properties; + this.uri = uri; + this.redisson = redisson; + } + + @Override + public CachingProvider getCachingProvider() { + return cacheProvider; + } + + @Override + public URI getURI() { + return uri; + } + + @Override + public ClassLoader getClassLoader() { + return classLoader; + } + + @Override + public Properties getProperties() { + return properties; + } + + private void checkNotClosed() { + if (closed) { + throw new IllegalStateException(); + } + } + + @Override + public > Cache createCache(String cacheName, C configuration) + throws IllegalArgumentException { + checkNotClosed(); + Redisson cacheRedisson = redisson; + + if (cacheName == null) { + throw new NullPointerException(); + } + if (configuration == null) { + throw new NullPointerException(); + } + + if (cacheRedisson == null && !(configuration instanceof 
RedissonConfiguration)) { + throw new IllegalStateException("Default configuration hasn't been specified!"); + } + + boolean hasOwnRedisson = false; + if (configuration instanceof RedissonConfiguration) { + RedissonConfiguration rc = (RedissonConfiguration) configuration; + if (rc.getConfig() != null) { + cacheRedisson = (Redisson) Redisson.create(rc.getConfig()); + hasOwnRedisson = true; + } else { + cacheRedisson = (Redisson) rc.getRedisson(); + } + } + + JCacheConfiguration cfg = new JCacheConfiguration(configuration); + JCache cache = new JCache(this, cacheRedisson, cacheName, cfg, hasOwnRedisson); + JCache oldCache = caches.putIfAbsent(cacheName, cache); + if (oldCache != null) { + throw new CacheException("Cache " + cacheName + " already exists"); + } + if (cfg.isStatisticsEnabled()) { + enableStatistics(cacheName, true); + } + if (cfg.isManagementEnabled()) { + enableManagement(cacheName, true); + } + return cache; + } + + @Override + public Cache getCache(String cacheName, Class keyType, Class valueType) { + checkNotClosed(); + if (cacheName == null) { + throw new NullPointerException(); + } + if (keyType == null) { + throw new NullPointerException(); + } + if (valueType == null) { + throw new NullPointerException(); + } + + JCache cache = caches.get(cacheName); + if (cache == null) { + return null; + } + + if (!keyType.isAssignableFrom(cache.getConfiguration(CompleteConfiguration.class).getKeyType())) { + throw new ClassCastException("Wrong type of key for " + cacheName); + } + if (!valueType.isAssignableFrom(cache.getConfiguration(CompleteConfiguration.class).getValueType())) { + throw new ClassCastException("Wrong type of value for " + cacheName); + } + return (Cache) cache; + } + + @Override + public Cache getCache(String cacheName) { + checkNotClosed(); + Cache cache = (Cache) getCache(cacheName, Object.class, Object.class); + if (cache != null) { + if (cache.getConfiguration(CompleteConfiguration.class).getKeyType() != Object.class) { + throw new IllegalArgumentException("Wrong type of key for " + cacheName); + } + if (cache.getConfiguration(CompleteConfiguration.class).getValueType() != Object.class) { + throw new IllegalArgumentException("Wrong type of value for " + cacheName); + } + } + return cache; + } + + @Override + public Iterable getCacheNames() { + return Collections.unmodifiableSet(new HashSet(caches.keySet())); + } + + @Override + public void destroyCache(String cacheName) { + checkNotClosed(); + if (cacheName == null) { + throw new NullPointerException(); + } + + JCache cache = caches.get(cacheName); + if (cache != null) { + cache.clear(); + cache.close(); + } + } + + public void closeCache(JCache cache) { + caches.remove(cache.getName()); + unregisterStatisticsBean(cache); + unregisterManagementBean(cache); + } + + @Override + public void enableManagement(String cacheName, boolean enabled) { + checkNotClosed(); + if (cacheName == null) { + throw new NullPointerException(); + } + + JCache cache = caches.get(cacheName); + if (cache == null) { + throw new NullPointerException(); + } + + if (enabled) { + JCacheManagementMXBean statBean = managementBeans.get(cache); + if (statBean == null) { + statBean = new JCacheManagementMXBean(cache); + JCacheManagementMXBean oldBean = managementBeans.putIfAbsent(cache, statBean); + if (oldBean != null) { + statBean = oldBean; + } + } + try { + ObjectName objectName = queryNames("Configuration", cache); + if (mBeanServer.queryNames(objectName, null).isEmpty()) { + mBeanServer.registerMBean(statBean, objectName); + } + } catch 
(MalformedObjectNameException e) { + throw new CacheException(e); + } catch (InstanceAlreadyExistsException e) { + throw new CacheException(e); + } catch (MBeanRegistrationException e) { + throw new CacheException(e); + } catch (NotCompliantMBeanException e) { + throw new CacheException(e); + } + } else { + unregisterManagementBean(cache); + } + cache.getConfiguration(JCacheConfiguration.class).setManagementEnabled(enabled); + } + + private ObjectName queryNames(String baseName, JCache cache) throws MalformedObjectNameException { + String name = getName(baseName, cache); + return new ObjectName(name); + } + + private void unregisterManagementBean(JCache cache) { + JCacheManagementMXBean statBean = managementBeans.remove(cache); + if (statBean != null) { + try { + ObjectName name = queryNames("Configuration", cache); + for (ObjectName objectName : mBeanServer.queryNames(name, null)) { + mBeanServer.unregisterMBean(objectName); + } + } catch (MalformedObjectNameException e) { + throw new CacheException(e); + } catch (MBeanRegistrationException e) { + throw new CacheException(e); + } catch (InstanceNotFoundException e) { + throw new CacheException(e); + } + } + } + + public JCacheStatisticsMXBean getStatBean(JCache cache) { + JCacheStatisticsMXBean bean = statBeans.get(cache); + if (bean != null) { + return bean; + } + return EMPTY_INSTANCE; + } + + private String getName(String name, JCache cache) { + return "javax.cache:type=Cache" + name + ",CacheManager=" + + cache.getCacheManager().getURI().toString().replaceAll(",|:|=|\n", ".") + + ",Cache=" + cache.getName().replaceAll(",|:|=|\n", "."); + } + + @Override + public void enableStatistics(String cacheName, boolean enabled) { + checkNotClosed(); + if (cacheName == null) { + throw new NullPointerException(); + } + + JCache cache = caches.get(cacheName); + if (cache == null) { + throw new NullPointerException(); + } + + if (enabled) { + JCacheStatisticsMXBean statBean = statBeans.get(cache); + if (statBean == null) { + statBean = new JCacheStatisticsMXBean(); + JCacheStatisticsMXBean oldBean = statBeans.putIfAbsent(cache, statBean); + if (oldBean != null) { + statBean = oldBean; + } + } + try { + ObjectName objectName = queryNames("Statistics", cache); + if (!mBeanServer.isRegistered(objectName)) { + mBeanServer.registerMBean(statBean, objectName); + } + } catch (MalformedObjectNameException e) { + throw new CacheException(e); + } catch (InstanceAlreadyExistsException e) { + throw new CacheException(e); + } catch (MBeanRegistrationException e) { + throw new CacheException(e); + } catch (NotCompliantMBeanException e) { + throw new CacheException(e); + } + } else { + unregisterStatisticsBean(cache); + } + cache.getConfiguration(JCacheConfiguration.class).setStatisticsEnabled(enabled); + } + + private void unregisterStatisticsBean(JCache cache) { + JCacheStatisticsMXBean statBean = statBeans.remove(cache); + if (statBean != null) { + try { + ObjectName name = queryNames("Statistics", cache); + for (ObjectName objectName : mBeanServer.queryNames(name, null)) { + mBeanServer.unregisterMBean(objectName); + } + } catch (MalformedObjectNameException e) { + throw new CacheException(e); + } catch (MBeanRegistrationException e) { + throw new CacheException(e); + } catch (InstanceNotFoundException e) { + throw new CacheException(e); + } + } + } + + @Override + public void close() { + if (isClosed()) { + return; + } + + synchronized (cacheProvider) { + if (!isClosed()) { + cacheProvider.close(uri, classLoader); + for (Cache cache : caches.values()) { + 
try { + cache.close(); + } catch (Exception e) { + // skip + } + } + redisson.shutdown(); + closed = true; + } + } + } + + @Override + public boolean isClosed() { + return closed; + } + + @Override + public T unwrap(Class clazz) { + if (clazz.isAssignableFrom(getClass())) { + return clazz.cast(this); + } + throw new IllegalArgumentException(); + } + + +} diff --git a/redisson/src/main/java/org/redisson/jcache/JCachingProvider.java b/redisson/src/main/java/org/redisson/jcache/JCachingProvider.java new file mode 100644 index 000000000..048a3c093 --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/JCachingProvider.java @@ -0,0 +1,195 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache; + +import java.io.IOException; +import java.net.URI; +import java.net.URISyntaxException; +import java.net.URL; +import java.util.Collections; +import java.util.Map; +import java.util.Properties; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; + +import javax.cache.CacheException; +import javax.cache.CacheManager; +import javax.cache.configuration.OptionalFeature; +import javax.cache.spi.CachingProvider; + +import org.redisson.Redisson; +import org.redisson.config.Config; + +/** + * + * @author Nikita Koksharov + * + */ +public class JCachingProvider implements CachingProvider { + + private final ConcurrentMap> managers = + new ConcurrentHashMap>(); + + private static final String DEFAULT_URI_PATH = "jsr107-default-config"; + private static URI defaulturi; + + static { + try { + defaulturi = new URI(DEFAULT_URI_PATH); + } catch (URISyntaxException e) { + throw new javax.cache.CacheException(e); + } + } + + @Override + public CacheManager getCacheManager(URI uri, ClassLoader classLoader, Properties properties) { + if (uri == null) { + uri = getDefaultURI(); + } + if (uri == null) { + throw new CacheException("Uri is not defined. 
Can't load default configuration"); + } + + if (classLoader == null) { + classLoader = getDefaultClassLoader(); + } + + ConcurrentMap value = new ConcurrentHashMap(); + ConcurrentMap oldValue = managers.putIfAbsent(classLoader, value); + if (oldValue != null) { + value = oldValue; + } + + CacheManager manager = value.get(uri); + if (manager != null) { + return manager; + } + + Config config = loadConfig(uri); + + Redisson redisson = null; + if (config != null) { + redisson = (Redisson) Redisson.create(config); + } + manager = new JCacheManager(redisson, classLoader, this, properties, uri); + CacheManager oldManager = value.putIfAbsent(uri, manager); + if (oldManager != null) { + if (redisson != null) { + redisson.shutdown(); + } + manager = oldManager; + } + return manager; + } + + private Config loadConfig(URI uri) { + Config config = null; + try { + URL jsonUrl = null; + if (DEFAULT_URI_PATH.equals(uri.getPath())) { + jsonUrl = JCachingProvider.class.getResource("/redisson-jcache.json"); + } else { + jsonUrl = uri.toURL(); + } + if (jsonUrl == null) { + throw new IOException(); + } + config = Config.fromJSON(jsonUrl); + } catch (IOException e) { + try { + URL yamlUrl = null; + if (DEFAULT_URI_PATH.equals(uri.getPath())) { + yamlUrl = JCachingProvider.class.getResource("/redisson-jcache.yaml"); + } else { + yamlUrl = uri.toURL(); + } + if (yamlUrl != null) { + config = Config.fromYAML(yamlUrl); + } + } catch (IOException e2) { + // skip + } + } + return config; + } + + @Override + public ClassLoader getDefaultClassLoader() { + return getClass().getClassLoader(); + } + + @Override + public URI getDefaultURI() { + return defaulturi; + } + + @Override + public Properties getDefaultProperties() { + return new Properties(); + } + + @Override + public CacheManager getCacheManager(URI uri, ClassLoader classLoader) { + return getCacheManager(uri, classLoader, getDefaultProperties()); + } + + @Override + public CacheManager getCacheManager() { + return getCacheManager(getDefaultURI(), getDefaultClassLoader()); + } + + @Override + public void close() { + synchronized (managers) { + for (ClassLoader classLoader : managers.keySet()) { + close(classLoader); + } + } + } + + @Override + public void close(ClassLoader classLoader) { + Map uri2manager = managers.remove(classLoader); + if (uri2manager != null) { + for (CacheManager manager : uri2manager.values()) { + manager.close(); + } + } + } + + @Override + public void close(URI uri, ClassLoader classLoader) { + Map uri2manager = managers.get(classLoader); + if (uri2manager == null) { + return; + } + CacheManager manager = uri2manager.remove(uri); + if (manager == null) { + return; + } + manager.close(); + if (uri2manager.isEmpty()) { + managers.remove(classLoader, Collections.emptyMap()); + } + } + + @Override + public boolean isSupported(OptionalFeature optionalFeature) { + // TODO implement support of store_by_reference + return false; + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/JMutableEntry.java b/redisson/src/main/java/org/redisson/jcache/JMutableEntry.java new file mode 100644 index 000000000..550eae54b --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/JMutableEntry.java @@ -0,0 +1,119 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache; + +import javax.cache.processor.MutableEntry; + +/** + * + * @author Nikita Koksharov + * + * @param key + * @param value + */ +public class JMutableEntry implements MutableEntry { + + public enum Action {CREATED, READ, UPDATED, DELETED, LOADED, SKIPPED} + + private final JCache jCache; + private final K key; + private boolean isReadThrough; + + private Action action = Action.SKIPPED; + private V value; + private boolean isValueRead; + + public JMutableEntry(JCache jCache, K key, V value, boolean isReadThrough) { + super(); + this.jCache = jCache; + this.key = key; + this.value = value; + this.isReadThrough = isReadThrough; + } + + @Override + public K getKey() { + return key; + } + + public V value() { + return value; + } + + @Override + public V getValue() { + if (action != Action.SKIPPED) { + return value; + } + + if (!isValueRead) { + value = jCache.getValueLocked(key); + isValueRead = true; + } + + if (value != null) { + action = Action.READ; + } else if (isReadThrough) { + value = jCache.load(key); + if (value != null) { + action = Action.LOADED; + } + isReadThrough = false; + } + return value; + } + + @Override + public T unwrap(Class clazz) { + return (T) this; + } + + @Override + public boolean exists() { + return getValue() != null; + } + + @Override + public void remove() { + if (action == Action.CREATED) { + action = Action.SKIPPED; + } else { + action = Action.DELETED; + } + value = null; + } + + @Override + public void setValue(V value) { + if (value == null) { + throw new NullPointerException(); + } + + if (action != Action.CREATED) { + if (jCache.containsKey(key)) { + action = Action.UPDATED; + } else { + action = Action.CREATED; + } + } + this.value = value; + } + + public Action getAction() { + return action; + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/bean/EmptyStatisticsMXBean.java b/redisson/src/main/java/org/redisson/jcache/bean/EmptyStatisticsMXBean.java new file mode 100644 index 000000000..742955842 --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/bean/EmptyStatisticsMXBean.java @@ -0,0 +1,57 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.jcache.bean; + +/** + * + * @author Nikita Koksharov + * + */ +public class EmptyStatisticsMXBean extends JCacheStatisticsMXBean { + + @Override + public void addEvictions(long value) { + } + + @Override + public void addGetTime(long value) { + } + + @Override + public void addHits(long value) { + } + + @Override + public void addMisses(long value) { + } + + @Override + public void addPuts(long value) { + } + + @Override + public void addPutTime(long value) { + } + + @Override + public void addRemovals(long value) { + } + + @Override + public void addRemoveTime(long value) { + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/bean/JCacheManagementMXBean.java b/redisson/src/main/java/org/redisson/jcache/bean/JCacheManagementMXBean.java new file mode 100644 index 000000000..2992a6215 --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/bean/JCacheManagementMXBean.java @@ -0,0 +1,72 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache.bean; + +import javax.cache.configuration.CompleteConfiguration; +import javax.cache.management.CacheMXBean; + +import org.redisson.jcache.JCache; + +/** + * + * @author Nikita Koksharov + * + */ +public class JCacheManagementMXBean implements CacheMXBean { + + private final JCache cache; + + public JCacheManagementMXBean(JCache cache) { + super(); + this.cache = cache; + } + + @Override + public String getKeyType() { + return cache.getConfiguration(CompleteConfiguration.class).getKeyType().getName(); + } + + @Override + public String getValueType() { + return cache.getConfiguration(CompleteConfiguration.class).getValueType().getName(); + } + + @Override + public boolean isReadThrough() { + return cache.getConfiguration(CompleteConfiguration.class).isReadThrough(); + } + + @Override + public boolean isWriteThrough() { + return cache.getConfiguration(CompleteConfiguration.class).isWriteThrough(); + } + + @Override + public boolean isStoreByValue() { + return cache.getConfiguration(CompleteConfiguration.class).isStoreByValue(); + } + + @Override + public boolean isStatisticsEnabled() { + return cache.getConfiguration(CompleteConfiguration.class).isStatisticsEnabled(); + } + + @Override + public boolean isManagementEnabled() { + return cache.getConfiguration(CompleteConfiguration.class).isManagementEnabled(); + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/bean/JCacheStatisticsMXBean.java b/redisson/src/main/java/org/redisson/jcache/bean/JCacheStatisticsMXBean.java new file mode 100644 index 000000000..64cb829b6 --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/bean/JCacheStatisticsMXBean.java @@ -0,0 +1,157 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache.bean; + +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicLong; + +import javax.cache.management.CacheStatisticsMXBean; + +/** + * + * @author Nikita Koksharov + * + */ +public class JCacheStatisticsMXBean implements CacheStatisticsMXBean { + + private final AtomicLong removals = new AtomicLong(); + private final AtomicLong hits = new AtomicLong(); + private final AtomicLong puts = new AtomicLong(); + private final AtomicLong misses = new AtomicLong(); + private final AtomicLong evictions = new AtomicLong(); + + private final AtomicLong removeTime = new AtomicLong(); + private final AtomicLong getTime = new AtomicLong(); + private final AtomicLong putTime = new AtomicLong(); + + + @Override + public void clear() { + removals.set(0); + hits.set(0); + puts.set(0); + misses.set(0); + evictions.set(0); + + removeTime.set(0); + getTime.set(0); + putTime.set(0); + } + + public void addHits(long value) { + hits.addAndGet(value); + } + + @Override + public long getCacheHits() { + return hits.get(); + } + + @Override + public float getCacheHitPercentage() { + long gets = getCacheGets(); + if (gets == 0) { + return 0; + } + return (getCacheHits() * 100) / (float) gets; + } + + public void addMisses(long value) { + misses.addAndGet(value); + } + + @Override + public long getCacheMisses() { + return misses.get(); + } + + @Override + public float getCacheMissPercentage() { + long gets = getCacheGets(); + if (gets == 0) { + return 0; + } + return (getCacheMisses() * 100) / (float) gets; + } + + @Override + public long getCacheGets() { + return hits.get() + misses.get(); + } + + public void addPuts(long value) { + puts.addAndGet(value); + } + + @Override + public long getCachePuts() { + return puts.get(); + } + + public void addRemovals(long value) { + removals.addAndGet(value); + } + + @Override + public long getCacheRemovals() { + return removals.get(); + } + + public void addEvictions(long value) { + evictions.addAndGet(value); + } + + @Override + public long getCacheEvictions() { + return evictions.get(); + } + + private float get(long value, long timeInNanos) { + if (value == 0 || timeInNanos == 0) { + return 0; + } + long timeInMicrosec = TimeUnit.NANOSECONDS.toMicros(timeInNanos); + return timeInMicrosec / value; + } + + public void addGetTime(long value) { + getTime.addAndGet(value); + } + + @Override + public float getAverageGetTime() { + return get(getCacheGets(), getTime.get()); + } + + public void addPutTime(long value) { + putTime.addAndGet(value); + } + + @Override + public float getAveragePutTime() { + return get(getCachePuts(), putTime.get()); + } + + public void addRemoveTime(long value) { + removeTime.addAndGet(value); + } + + @Override + public float getAverageRemoveTime() { + return get(getCachePuts(), removeTime.get()); + } + +} diff --git a/redisson/src/main/java/org/redisson/jcache/configuration/JCacheConfiguration.java b/redisson/src/main/java/org/redisson/jcache/configuration/JCacheConfiguration.java new file mode 100644 index 000000000..bf35cbfb4 --- /dev/null +++ 
b/redisson/src/main/java/org/redisson/jcache/configuration/JCacheConfiguration.java @@ -0,0 +1,147 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache.configuration; + +import javax.cache.configuration.CacheEntryListenerConfiguration; +import javax.cache.configuration.CompleteConfiguration; +import javax.cache.configuration.Configuration; +import javax.cache.configuration.Factory; +import javax.cache.configuration.MutableConfiguration; +import javax.cache.expiry.ExpiryPolicy; +import javax.cache.integration.CacheLoader; +import javax.cache.integration.CacheWriter; + +/** + * Configuration object for JCache {@link javax.cache.Cache} + * + * @author Nikita Koksharov + * + * @param key type + * @param value type + */ +public class JCacheConfiguration implements CompleteConfiguration { + + private static final long serialVersionUID = -7861479608049089078L; + + private final ExpiryPolicy expiryPolicy; + private final MutableConfiguration delegate; + + public JCacheConfiguration(Configuration configuration) { + if (configuration != null) { + if (configuration instanceof RedissonConfiguration) { + configuration = ((RedissonConfiguration)configuration).getJcacheConfig(); + } + + if (configuration instanceof CompleteConfiguration) { + delegate = new MutableConfiguration((CompleteConfiguration) configuration); + } else { + delegate = new MutableConfiguration(); + delegate.setStoreByValue(configuration.isStoreByValue()); + delegate.setTypes(configuration.getKeyType(), configuration.getValueType()); + } + } else { + delegate = new MutableConfiguration(); + } + + this.expiryPolicy = delegate.getExpiryPolicyFactory().create(); + } + + @Override + public Class getKeyType() { + if (delegate.getKeyType() == null) { + return (Class) Object.class; + } + return delegate.getKeyType(); + } + + @Override + public Class getValueType() { + if (delegate.getValueType() == null) { + return (Class) Object.class; + } + return delegate.getValueType(); + } + + @Override + public boolean isStoreByValue() { + return delegate.isStoreByValue(); + } + + @Override + public boolean isReadThrough() { + return delegate.isReadThrough(); + } + + @Override + public boolean isWriteThrough() { + return delegate.isWriteThrough(); + } + + @Override + public boolean isStatisticsEnabled() { + return delegate.isStatisticsEnabled(); + } + + public void setStatisticsEnabled(boolean enabled) { + delegate.setStatisticsEnabled(enabled); + } + + public void setManagementEnabled(boolean enabled) { + delegate.setManagementEnabled(enabled); + } + + @Override + public boolean isManagementEnabled() { + return delegate.isManagementEnabled(); + } + + @Override + public Iterable> getCacheEntryListenerConfigurations() { + return delegate.getCacheEntryListenerConfigurations(); + } + + public void addCacheEntryListenerConfiguration( + CacheEntryListenerConfiguration cacheEntryListenerConfiguration) { + 
delegate.addCacheEntryListenerConfiguration(cacheEntryListenerConfiguration); + } + + public void removeCacheEntryListenerConfiguration( + CacheEntryListenerConfiguration cacheEntryListenerConfiguration) { + delegate.removeCacheEntryListenerConfiguration(cacheEntryListenerConfiguration); + } + + @Override + public Factory> getCacheLoaderFactory() { + return delegate.getCacheLoaderFactory(); + } + + @Override + public Factory> getCacheWriterFactory() { + return delegate.getCacheWriterFactory(); + } + + @Override + public Factory getExpiryPolicyFactory() { + return delegate.getExpiryPolicyFactory(); + } + + public ExpiryPolicy getExpiryPolicy() { + return expiryPolicy; + } + + + +} diff --git a/redisson/src/main/java/org/redisson/jcache/configuration/RedissonConfiguration.java b/redisson/src/main/java/org/redisson/jcache/configuration/RedissonConfiguration.java new file mode 100644 index 000000000..345f8ec5b --- /dev/null +++ b/redisson/src/main/java/org/redisson/jcache/configuration/RedissonConfiguration.java @@ -0,0 +1,96 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.jcache.configuration; + +import javax.cache.configuration.Configuration; +import javax.cache.configuration.MutableConfiguration; + +import org.redisson.Redisson; +import org.redisson.api.RedissonClient; +import org.redisson.config.Config; + +/** + * + * @author Nikita Koksharov + * + * @param the type of key + * @param the type of value + */ +public class RedissonConfiguration implements Configuration { + + private static final long serialVersionUID = 5331107577281201157L; + + private Configuration jcacheConfig; + + private Config config; + private RedissonClient redisson; + + RedissonConfiguration(Config config, Configuration jcacheConfig) { + this.config = config; + this.jcacheConfig = jcacheConfig; + } + + RedissonConfiguration(RedissonClient redisson, Configuration jcacheConfig) { + this.redisson = redisson; + this.jcacheConfig = jcacheConfig; + } + + public static Configuration fromInstance(RedissonClient redisson) { + MutableConfiguration config = new MutableConfiguration(); + return fromInstance(redisson, config); + } + + public static Configuration fromInstance(RedissonClient redisson, Configuration jcacheConfig) { + return new RedissonConfiguration(redisson, jcacheConfig); + } + + public static Configuration fromConfig(Config config) { + MutableConfiguration jcacheConfig = new MutableConfiguration(); + return new RedissonConfiguration(config, jcacheConfig); + } + + public static Configuration fromConfig(Config config, Configuration jcacheConfig) { + return new RedissonConfiguration(config, jcacheConfig); + } + + public Configuration getJcacheConfig() { + return jcacheConfig; + } + + public RedissonClient getRedisson() { + return redisson; + } + + public Config getConfig() { + return config; + } + + @Override + public Class getKeyType() { + return (Class) Object.class; + } + + @Override + public Class getValueType() { + return (Class) 
Object.class; + } + + @Override + public boolean isStoreByValue() { + return true; + } + +} diff --git a/redisson/src/main/java/org/redisson/liveobject/core/AccessorInterceptor.java b/redisson/src/main/java/org/redisson/liveobject/core/AccessorInterceptor.java index 9d3b1ad87..9a8847cbe 100644 --- a/redisson/src/main/java/org/redisson/liveobject/core/AccessorInterceptor.java +++ b/redisson/src/main/java/org/redisson/liveobject/core/AccessorInterceptor.java @@ -151,13 +151,12 @@ public class AccessorInterceptor { } private boolean isGetter(Method method, String fieldName) { - return (method.getName().startsWith("get") || method.getName().startsWith("is")) - && method.getName().endsWith(getFieldNameSuffix(fieldName)); + return method.getName().equals("get" + getFieldNameSuffix(fieldName)) + || method.getName().equals("is" + getFieldNameSuffix(fieldName)); } private boolean isSetter(Method method, String fieldName) { - return method.getName().startsWith("set") - && method.getName().endsWith(getFieldNameSuffix(fieldName)); + return method.getName().equals("set" + getFieldNameSuffix(fieldName)); } private static String getFieldNameSuffix(String fieldName) { diff --git a/redisson/src/main/java/org/redisson/misc/LogHelper.java b/redisson/src/main/java/org/redisson/misc/LogHelper.java new file mode 100644 index 000000000..fe86507af --- /dev/null +++ b/redisson/src/main/java/org/redisson/misc/LogHelper.java @@ -0,0 +1,103 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.misc; + +import java.lang.reflect.Array; +import java.util.Collection; + +/** + * @author Philipp Marx + */ +public class LogHelper { + + private static final int MAX_COLLECTION_LOG_SIZE = Integer.valueOf(System.getProperty("redisson.maxCollectionLogSize", "10")); + private static final int MAX_STRING_LOG_SIZE = Integer.valueOf(System.getProperty("redisson.maxStringLogSize", "100")); + + private LogHelper() { + } + + public static String toString(T object) { + if (object == null) { + return "null"; + } else if (object instanceof String) { + return toStringString((String) object); + } else if (object.getClass().isArray()) { + return toArrayString(object); + } else if (object instanceof Collection) { + return toCollectionString((Collection) object); + } else { + return String.valueOf(object); + } + } + + private static String toStringString(String string) { + if (string.length() > MAX_STRING_LOG_SIZE) { + return new StringBuilder(string.substring(0, MAX_STRING_LOG_SIZE)).append("...").toString(); + } else { + return string; + } + } + + private static String toCollectionString(Collection collection) { + if (collection.isEmpty()) { + return "[]"; + } + + StringBuilder b = new StringBuilder(collection.size() * 3); + b.append('['); + int i = 0; + for (Object object : collection) { + b.append(toString(object)); + i++; + + if (i == collection.size()) { + b.append(']'); + break; + } + b.append(", "); + + if (i == MAX_COLLECTION_LOG_SIZE) { + b.append("...]"); + break; + } + } + + return b.toString(); + } + + private static String toArrayString(Object array) { + int length = Array.getLength(array) - 1; + if (length == -1) { + return "[]"; + } + + StringBuilder b = new StringBuilder(length * 3); + b.append('['); + for (int i = 0;; ++i) { + b.append(toString(Array.get(array, i))); + + if (i == length) { + return b.append(']').toString(); + } + + b.append(", "); + + if (i == MAX_COLLECTION_LOG_SIZE - 1) { + return b.append("...]").toString(); + } + } + } +} diff --git a/redisson/src/main/java/org/redisson/misc/URLBuilder.java b/redisson/src/main/java/org/redisson/misc/URLBuilder.java new file mode 100644 index 000000000..f27c189db --- /dev/null +++ b/redisson/src/main/java/org/redisson/misc/URLBuilder.java @@ -0,0 +1,125 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.misc; + +import java.io.IOException; +import java.lang.reflect.Field; +import java.net.InetSocketAddress; +import java.net.MalformedURLException; +import java.net.URL; +import java.net.URLConnection; +import java.net.URLStreamHandler; +import java.net.URLStreamHandlerFactory; + +/** + * + * @author Nikita Koksharov + * + */ +public class URLBuilder { + + private static URLStreamHandlerFactory currentFactory; + + public static void restoreURLFactory() { + try { + Field field = URL.class.getDeclaredField("factory"); + field.setAccessible(true); + field.set(null, currentFactory); + } catch (Exception e) { + throw new IllegalStateException(e); + } + } + + public static void replaceURLFactory() { + try { + Field field = URL.class.getDeclaredField("factory"); + field.setAccessible(true); + currentFactory = (URLStreamHandlerFactory) field.get(null); + if (currentFactory != null) { + field.set(null, null); + } + + URL.setURLStreamHandlerFactory(new URLStreamHandlerFactory() { + @Override + public URLStreamHandler createURLStreamHandler(String protocol) { + if ("redis".equals(protocol)) { + return new URLStreamHandler() { + @Override + protected URLConnection openConnection(URL u) throws IOException { + throw new UnsupportedOperationException(); + }; + + @Override + protected boolean equals(URL u1, URL u2) { + return u1.toString().equals(u2.toString()); + } + + @Override + protected int hashCode(URL u) { + return u.toString().hashCode(); + } + }; + } + + if (currentFactory != null) { + return currentFactory.createURLStreamHandler(protocol); + } + return null; + } + }); + } catch (Exception e) { + throw new IllegalStateException(e); + } + } + + public static InetSocketAddress toAddress(String url) { + String[] parts = url.split(":"); + if (parts.length-1 >= 3) { + String port = parts[parts.length-1]; + String newPort = port.split("[^\\d]")[0]; + String host = url.replace(":" + port, ""); + return new InetSocketAddress(host, Integer.valueOf(newPort)); + } else { + String port = parts[parts.length-1]; + String newPort = port.split("[^\\d]")[0]; + String host = url.replace(":" + port, ""); + return new InetSocketAddress(host, Integer.valueOf(newPort)); + } + } + + public static URL create(String url) { + replaceURLFactory(); + try { + String[] parts = url.split(":"); + if (parts.length-1 >= 3) { + String port = parts[parts.length-1]; + String newPort = port.split("[^\\d]")[0]; + String host = url.replace(":" + port, ""); + return new URL("redis://[" + host + "]:" + newPort); + } else { + String port = parts[parts.length-1]; + String newPort = port.split("[^\\d]")[0]; + String host = url.replace(":" + port, ""); + return new URL("redis://" + host + ":" + newPort); + } + } catch (MalformedURLException e) { + throw new IllegalArgumentException(e); + } finally { + restoreURLFactory(); + } + } + +} diff --git a/redisson/src/main/java/org/redisson/pubsub/AsyncSemaphore.java b/redisson/src/main/java/org/redisson/pubsub/AsyncSemaphore.java index 49f3d29b2..c4ad451f0 100644 --- a/redisson/src/main/java/org/redisson/pubsub/AsyncSemaphore.java +++ b/redisson/src/main/java/org/redisson/pubsub/AsyncSemaphore.java @@ -15,8 +15,11 @@ */ package org.redisson.pubsub; +import java.util.Iterator; +import java.util.LinkedHashSet; import java.util.LinkedList; import java.util.Queue; +import java.util.Set; import java.util.concurrent.CountDownLatch; /** @@ -27,7 +30,7 @@ import java.util.concurrent.CountDownLatch; public class AsyncSemaphore { private int counter; - private final Queue listeners = 
new LinkedList(); + private final Set listeners = new LinkedHashSet(); public AsyncSemaphore(int permits) { counter = permits; @@ -48,6 +51,12 @@ public class AsyncSemaphore { Thread.currentThread().interrupt(); } } + + public int queueSize() { + synchronized (this) { + return listeners.size(); + } + } public void acquire(Runnable listener) { boolean run = false; @@ -74,12 +83,20 @@ public class AsyncSemaphore { } } + public int getCounter() { + return counter; + } + public void release() { Runnable runnable = null; synchronized (this) { counter++; - runnable = listeners.poll(); + Iterator iter = listeners.iterator(); + if (iter.hasNext()) { + runnable = iter.next(); + iter.remove(); + } } if (runnable != null) { diff --git a/redisson/src/main/java/org/redisson/pubsub/CountDownLatchPubSub.java b/redisson/src/main/java/org/redisson/pubsub/CountDownLatchPubSub.java index 11ab3dd9a..f054c0f5d 100644 --- a/redisson/src/main/java/org/redisson/pubsub/CountDownLatchPubSub.java +++ b/redisson/src/main/java/org/redisson/pubsub/CountDownLatchPubSub.java @@ -19,6 +19,11 @@ import org.redisson.RedissonCountDownLatch; import org.redisson.RedissonCountDownLatchEntry; import org.redisson.misc.RPromise; +/** + * + * @author Nikita Koksharov + * + */ public class CountDownLatchPubSub extends PublishSubscribe { @Override diff --git a/redisson/src/main/java/org/redisson/pubsub/LockPubSub.java b/redisson/src/main/java/org/redisson/pubsub/LockPubSub.java index e7cdfb6ba..f765570f9 100644 --- a/redisson/src/main/java/org/redisson/pubsub/LockPubSub.java +++ b/redisson/src/main/java/org/redisson/pubsub/LockPubSub.java @@ -18,6 +18,11 @@ package org.redisson.pubsub; import org.redisson.RedissonLockEntry; import org.redisson.misc.RPromise; +/** + * + * @author Nikita Koksharov + * + */ public class LockPubSub extends PublishSubscribe { public static final Long unlockMessage = 0L; diff --git a/redisson/src/main/java/org/redisson/pubsub/PublishSubscribe.java b/redisson/src/main/java/org/redisson/pubsub/PublishSubscribe.java index 801902d46..d15bec4c0 100644 --- a/redisson/src/main/java/org/redisson/pubsub/PublishSubscribe.java +++ b/redisson/src/main/java/org/redisson/pubsub/PublishSubscribe.java @@ -30,6 +30,12 @@ import org.redisson.misc.RPromise; import io.netty.util.internal.PlatformDependent; +/** + * + * @author Nikita Koksharov + * + * @param + */ abstract class PublishSubscribe> { private final ConcurrentMap entries = PlatformDependent.newConcurrentHashMap(); diff --git a/redisson/src/main/java/org/redisson/pubsub/SemaphorePubSub.java b/redisson/src/main/java/org/redisson/pubsub/SemaphorePubSub.java index 85a846b6a..0d3c05490 100644 --- a/redisson/src/main/java/org/redisson/pubsub/SemaphorePubSub.java +++ b/redisson/src/main/java/org/redisson/pubsub/SemaphorePubSub.java @@ -18,6 +18,11 @@ package org.redisson.pubsub; import org.redisson.RedissonLockEntry; import org.redisson.misc.RPromise; +/** + * + * @author Nikita Koksharov + * + */ public class SemaphorePubSub extends PublishSubscribe { @Override diff --git a/redisson/src/main/java/org/redisson/reactive/ReactiveIterator.java b/redisson/src/main/java/org/redisson/reactive/ReactiveIterator.java new file mode 100644 index 000000000..768d80c01 --- /dev/null +++ b/redisson/src/main/java/org/redisson/reactive/ReactiveIterator.java @@ -0,0 +1,37 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.reactive; + +import java.net.InetSocketAddress; + +import org.reactivestreams.Publisher; +import org.redisson.client.protocol.decoder.MapScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; + +/** + * + * @author Nikita Koksharov + * + * @param key type + * @param value type + */ +interface MapReactive { + + Publisher> scanIteratorReactive(InetSocketAddress client, long startPos); + + Publisher put(K key, V value); + +} diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonBatchReactive.java b/redisson/src/main/java/org/redisson/reactive/RedissonBatchReactive.java index adfd96518..74eca37b4 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonBatchReactive.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonBatchReactive.java @@ -16,10 +16,9 @@ package org.redisson.reactive; import java.util.List; +import java.util.UUID; import org.reactivestreams.Publisher; -import org.redisson.EvictionScheduler; -import org.redisson.Redisson; import org.redisson.api.RAtomicLongReactive; import org.redisson.api.RBatchReactive; import org.redisson.api.RBitSetReactive; @@ -42,13 +41,16 @@ import org.redisson.api.RedissonReactiveClient; import org.redisson.client.codec.Codec; import org.redisson.command.CommandBatchService; import org.redisson.connection.ConnectionManager; +import org.redisson.eviction.EvictionScheduler; public class RedissonBatchReactive implements RBatchReactive { private final EvictionScheduler evictionScheduler; private final CommandBatchService executorService; + private final UUID id; - public RedissonBatchReactive(EvictionScheduler evictionScheduler, ConnectionManager connectionManager) { + public RedissonBatchReactive(UUID id, EvictionScheduler evictionScheduler, ConnectionManager connectionManager) { + this.id = id; this.evictionScheduler = evictionScheduler; this.executorService = new CommandBatchService(connectionManager); } @@ -95,12 +97,12 @@ public class RedissonBatchReactive implements RBatchReactive { @Override public RMapCacheReactive getMapCache(String name, Codec codec) { - return new RedissonMapCacheReactive(codec, evictionScheduler, executorService, name); + return new RedissonMapCacheReactive(id, evictionScheduler, codec, executorService, name); } @Override public RMapCacheReactive getMapCache(String name) { - return new RedissonMapCacheReactive(evictionScheduler, executorService, name); + return new RedissonMapCacheReactive(id, evictionScheduler, executorService, name); } @Override diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonDequeReactive.java b/redisson/src/main/java/org/redisson/reactive/RedissonDequeReactive.java index 6b6dd9f66..e0de1ed26 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonDequeReactive.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonDequeReactive.java @@ -22,8 +22,8 @@ import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.convertor.VoidReplayConvertor; 
+import org.redisson.client.protocol.decoder.ListFirstObjectDecoder; import org.redisson.command.CommandReactiveExecutor; -import org.redisson.connection.decoder.ListFirstObjectDecoder; /** * Distributed and concurrent implementation of {@link java.util.Queue} diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonKeysReactive.java b/redisson/src/main/java/org/redisson/reactive/RedissonKeysReactive.java index fe2cc94c8..90846936c 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonKeysReactive.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonKeysReactive.java @@ -55,7 +55,7 @@ public class RedissonKeysReactive implements RKeysReactive { } @Override - public Publisher getKeysByPattern(final String pattern) { + public Publisher getKeysByPattern(String pattern) { List> publishers = new ArrayList>(); for (MasterSlaveEntry entry : commandExecutor.getConnectionManager().getEntrySet()) { publishers.add(createKeysIterator(entry, pattern)); diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonMapCacheReactive.java b/redisson/src/main/java/org/redisson/reactive/RedissonMapCacheReactive.java index dc7599072..19a56653e 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonMapCacheReactive.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonMapCacheReactive.java @@ -16,39 +16,31 @@ package org.redisson.reactive; import java.net.InetSocketAddress; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collections; -import java.util.List; import java.util.Map; +import java.util.Map.Entry; import java.util.Set; +import java.util.UUID; import java.util.concurrent.TimeUnit; import org.reactivestreams.Publisher; -import org.reactivestreams.Subscription; -import org.redisson.EvictionScheduler; +import org.redisson.RedissonMapCache; +import org.redisson.api.RMapCache; import org.redisson.api.RMapCacheReactive; +import org.redisson.api.RMapReactive; import org.redisson.client.codec.Codec; -import org.redisson.client.codec.LongCodec; -import org.redisson.client.codec.ScanCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; -import org.redisson.client.protocol.RedisCommands; -import org.redisson.client.protocol.convertor.BooleanReplayConvertor; -import org.redisson.client.protocol.convertor.Convertor; import org.redisson.client.protocol.decoder.MapScanResult; import org.redisson.client.protocol.decoder.MapScanResultReplayDecoder; import org.redisson.client.protocol.decoder.NestedMultiDecoder; -import org.redisson.client.protocol.decoder.ObjectListReplayDecoder; import org.redisson.client.protocol.decoder.ObjectMapReplayDecoder; import org.redisson.client.protocol.decoder.ScanObjectEntry; -import org.redisson.client.protocol.decoder.TTLMapValueReplayDecoder; import org.redisson.command.CommandReactiveExecutor; -import org.redisson.connection.decoder.CacheGetAllDecoder; +import org.redisson.eviction.EvictionScheduler; -import reactor.rx.Promise; -import reactor.rx.Promises; -import reactor.rx.action.support.DefaultSubscriber; +import reactor.fn.BiFunction; +import reactor.fn.Function; +import reactor.rx.Streams; /** *

Map-based cache with ability to set TTL for each entry via @@ -59,7 +51,7 @@ import reactor.rx.action.support.DefaultSubscriber; * Thus entries are checked for TTL expiration during any key/value/entry read operation. * If a key/value/entry has expired then it is not returned and a clean-up task runs asynchronously. * The clean-up task removes 100 expired entries at once. - * In addition there is {@link org.redisson.EvictionScheduler}. This scheduler + * In addition there is {@link org.redisson.eviction.EvictionScheduler}. This scheduler * deletes expired entries in a time interval between 5 seconds and 2 hours.
 *
 If eviction is not required then it's better to use {@link org.redisson.reactive.RedissonMapReactive}.

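As a minimal illustration of the per-entry TTL behaviour described in the Javadoc above, the sketch below (not part of the patch; the class name MapCacheTtlExample, the cache name and the server address are assumptions) writes entries through the reactive map cache API added here, blocking for results in the same Streams.create(...).next().poll() style used elsewhere in this patch:

import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RMapCacheReactive;
import org.redisson.api.RedissonReactiveClient;
import org.redisson.config.Config;

import reactor.rx.Streams;

public class MapCacheTtlExample {
    public static void main(String[] args) {
        // Assumed setup: a Redis server reachable at redis://127.0.0.1:6379.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonReactiveClient redisson = Redisson.createReactive(config);

        RMapCacheReactive<String, String> cache = redisson.getMapCache("myCache");

        // The entry expires 10 minutes after the write; expired entries are skipped on read
        // and later removed by the clean-up task / EvictionScheduler described above.
        String previous = Streams.create(cache.put("key", "value", 10, TimeUnit.MINUTES)).next().poll();

        // putIfAbsent with a TTL returns the previously stored value, or null if the key was absent.
        String existing = Streams.create(cache.putIfAbsent("key", "other", 30, TimeUnit.SECONDS)).next().poll();

        redisson.shutdown();
    }
}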
@@ -69,325 +61,246 @@ import reactor.rx.action.support.DefaultSubscriber; * @param key * @param value */ -public class RedissonMapCacheReactive extends RedissonMapReactive implements RMapCacheReactive { +public class RedissonMapCacheReactive extends RedissonExpirableReactive implements RMapCacheReactive, MapReactive { - private static final RedisCommand> EVAL_HSCAN = new RedisCommand>("EVAL", new NestedMultiDecoder(new ObjectMapReplayDecoder(), new MapScanResultReplayDecoder()), ValueType.MAP); - private static final RedisCommand EVAL_REMOVE = new RedisCommand("EVAL", 4, ValueType.MAP_KEY, ValueType.MAP_VALUE); - private static final RedisCommand EVAL_REMOVE_VALUE = new RedisCommand("EVAL", new BooleanReplayConvertor(), 5, ValueType.MAP); - private static final RedisCommand EVAL_PUT_TTL = new RedisCommand("EVAL", 6, ValueType.MAP, ValueType.MAP_VALUE); - private static final RedisCommand> EVAL_GET_TTL = new RedisCommand>("EVAL", new TTLMapValueReplayDecoder(), 5, ValueType.MAP_KEY, ValueType.MAP_VALUE); - private static final RedisCommand> EVAL_CONTAINS_KEY = new RedisCommand>("EVAL", new ObjectListReplayDecoder(), 5, ValueType.MAP_KEY); - private static final RedisCommand> EVAL_CONTAINS_VALUE = new RedisCommand>("EVAL", new ObjectListReplayDecoder(), 5, ValueType.MAP_VALUE); - private static final RedisCommand EVAL_FAST_REMOVE = new RedisCommand("EVAL", 5, ValueType.MAP_KEY); + private static final RedisCommand> EVAL_HSCAN = + new RedisCommand>("EVAL", new NestedMultiDecoder(new ObjectMapReplayDecoder(), new MapScanResultReplayDecoder()), ValueType.MAP); - private final EvictionScheduler evictionScheduler; + private final RMapCache mapCache; - public RedissonMapCacheReactive(EvictionScheduler evictionScheduler, CommandReactiveExecutor commandExecutor, String name) { + public RedissonMapCacheReactive(UUID id, EvictionScheduler evictionScheduler, CommandReactiveExecutor commandExecutor, String name) { super(commandExecutor, name); - this.evictionScheduler = evictionScheduler; - evictionScheduler.schedule(getName(), getTimeoutSetName()); + this.mapCache = new RedissonMapCache(id, evictionScheduler, commandExecutor, name); } - public RedissonMapCacheReactive(Codec codec, EvictionScheduler evictionScheduler, CommandReactiveExecutor commandExecutor, String name) { + public RedissonMapCacheReactive(UUID id, EvictionScheduler evictionScheduler, Codec codec, CommandReactiveExecutor commandExecutor, String name) { super(codec, commandExecutor, name); - this.evictionScheduler = evictionScheduler; - evictionScheduler.schedule(getName(), getTimeoutSetName()); + this.mapCache = new RedissonMapCache(id, codec, evictionScheduler, commandExecutor, name); } @Override public Publisher containsKey(Object key) { - Promise result = Promises.prepare(); - - Publisher> future = commandExecutor.evalReadReactive(getName(), codec, EVAL_CONTAINS_KEY, - "local value = redis.call('hexists', KEYS[1], ARGV[1]); " + - "local expireDate = 92233720368547758; " + - "if value == 1 then " + - "local expireDateScore = redis.call('zscore', KEYS[2], ARGV[1]); " - + "if expireDateScore ~= false then " - + "expireDate = tonumber(expireDateScore) " - + "end; " + - "end;" + - "return {expireDate, value}; ", - Arrays.asList(getName(), getTimeoutSetName()), key); - - addExpireListener(result, future, new BooleanReplayConvertor(), false); - - return result; + return reactive(mapCache.containsKeyAsync(key)); } @Override public Publisher containsValue(Object value) { - Promise result = Promises.prepare(); - - Publisher> future = 
commandExecutor.evalReadReactive(getName(), codec, EVAL_CONTAINS_VALUE, - "local s = redis.call('hgetall', KEYS[1]);" + - "for i, v in ipairs(s) do " - + "if i % 2 == 0 and ARGV[1] == v then " - + "local key = s[i-1];" - + "local expireDate = redis.call('zscore', KEYS[2], key); " - + "if expireDate == false then " - + "expireDate = 92233720368547758 " - + "else " - + "expireDate = tonumber(expireDate) " - + "end; " - + "return {expireDate, 1}; " - + "end " - + "end;" + - "return {92233720368547758, 0};", - Arrays.asList(getName(), getTimeoutSetName()), value); - - addExpireListener(result, future, new BooleanReplayConvertor(), false); - - return result; + return reactive(mapCache.containsValueAsync(value)); } @Override public Publisher> getAll(Set keys) { - if (keys.isEmpty()) { - return newSucceeded(Collections.emptyMap()); - } - - List args = new ArrayList(keys.size() + 2); - args.add(System.currentTimeMillis()); - args.addAll(keys); - - final Promise> result = Promises.prepare(); - Publisher> publisher = commandExecutor.evalReadReactive(getName(), codec, new RedisCommand>("EVAL", new CacheGetAllDecoder(args), 6, ValueType.MAP_KEY, ValueType.MAP_VALUE), - "local expireHead = redis.call('zrange', KEYS[2], 0, 0, 'withscores');" + - "local maxDate = table.remove(ARGV, 1); " // index is the first parameter - + "local minExpireDate = 92233720368547758;" + - "if #expireHead == 2 and tonumber(expireHead[2]) <= tonumber(maxDate) then " - + "for i, key in pairs(ARGV) do " - + "local expireDate = redis.call('zscore', KEYS[2], key); " - + "if expireDate ~= false and tonumber(expireDate) <= tonumber(maxDate) then " - + "minExpireDate = math.min(tonumber(expireDate), minExpireDate); " - + "ARGV[i] = ARGV[i] .. '__redisson__skip' " - + "end;" - + "end;" - + "end; " + - "return {minExpireDate, unpack(redis.call('hmget', KEYS[1], unpack(ARGV)))};", - Arrays.asList(getName(), getTimeoutSetName()), args.toArray()); - - publisher.subscribe(new DefaultSubscriber>() { - - @Override - public void onSubscribe(Subscription s) { - s.request(1); - } - - @Override - public void onNext(List res) { - Long expireDate = (Long) res.get(0); - long currentDate = System.currentTimeMillis(); - if (expireDate <= currentDate) { - evictionScheduler.runCleanTask(getName(), getTimeoutSetName(), currentDate); - } - - result.onNext((Map) res.get(1)); - result.onComplete(); - } - - @Override - public void onError(Throwable t) { - result.onError(t); - } - - }); - - return result; - + return reactive(mapCache.getAllAsync(keys)); } @Override public Publisher putIfAbsent(K key, V value, long ttl, TimeUnit unit) { - if (ttl < 0) { - throw new IllegalArgumentException("TTL can't be negative"); - } - if (ttl == 0) { - return putIfAbsent(key, value); - } - - if (unit == null) { - throw new NullPointerException("TimeUnit param can't be null"); - } - - long timeoutDate = System.currentTimeMillis() + unit.toMillis(ttl); - return commandExecutor.evalWriteReactive(getName(), codec, EVAL_PUT_TTL, - "if redis.call('hexists', KEYS[1], ARGV[2]) == 0 then " - + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[2]); " - + "redis.call('hset', KEYS[1], ARGV[2], ARGV[3]); " - + "return nil " - + "else " - + "return redis.call('hget', KEYS[1], ARGV[2]) " - + "end", - Arrays.asList(getName(), getTimeoutSetName()), timeoutDate, key, value); + return reactive(mapCache.putIfAbsentAsync(key, value, ttl, unit)); } @Override public Publisher remove(Object key, Object value) { - return commandExecutor.evalWriteReactive(getName(), codec, EVAL_REMOVE_VALUE, - "if 
redis.call('hget', KEYS[1], ARGV[1]) == ARGV[2] then " - + "redis.call('zrem', KEYS[2], ARGV[1]); " - + "return redis.call('hdel', KEYS[1], ARGV[1]); " - + "else " - + "return 0 " - + "end", - Arrays.asList(getName(), getTimeoutSetName()), key, value); + return reactive(mapCache.removeAsync(key, value)); } @Override public Publisher get(K key) { - Promise result = Promises.prepare(); + return reactive(mapCache.getAsync(key)); + } - Publisher> future = commandExecutor.evalReadReactive(getName(), codec, EVAL_GET_TTL, - "local value = redis.call('hget', KEYS[1], ARGV[1]); " + - "local expireDate = redis.call('zscore', KEYS[2], ARGV[1]); " - + "if expireDate == false then " - + "expireDate = 92233720368547758; " - + "end; " + - "return {expireDate, value}; ", - Arrays.asList(getName(), getTimeoutSetName()), key); + @Override + public Publisher put(K key, V value, long ttl, TimeUnit unit) { + return reactive(mapCache.putAsync(key, value, ttl, unit)); + } - addExpireListener(result, future, null, null); + String getTimeoutSetName() { + return "redisson__timeout__set__{" + getName() + "}"; + } - return result; + @Override + public Publisher remove(K key) { + return reactive(mapCache.removeAsync(key)); } - private void addExpireListener(final Promise result, Publisher> publisher, final Convertor convertor, final T nullValue) { - publisher.subscribe(new DefaultSubscriber>() { + @Override + public Publisher fastRemove(K ... keys) { + return reactive(mapCache.fastRemoveAsync(keys)); + } - @Override - public void onSubscribe(Subscription s) { - s.request(1); - } + @Override + public Publisher> scanIteratorReactive(InetSocketAddress client, long startPos) { + return reactive(((RedissonMapCache)mapCache).scanIteratorAsync(getName(), client, startPos)); + } - @Override - public void onNext(List res) { - Long expireDate = (Long) res.get(0); - long currentDate = System.currentTimeMillis(); - if (expireDate <= currentDate) { - result.onNext(nullValue); - result.onComplete(); - evictionScheduler.runCleanTask(getName(), getTimeoutSetName(), currentDate); - return; - } + @Override + public Publisher delete() { + return reactive(mapCache.deleteAsync()); + } - if (convertor != null) { - result.onNext((T) convertor.convert(res.get(1))); - } else { - result.onNext((T) res.get(1)); - } - result.onComplete(); - } + @Override + public Publisher expire(long timeToLive, TimeUnit timeUnit) { + return reactive(mapCache.expireAsync(timeToLive, timeUnit)); + } - @Override - public void onError(Throwable t) { - result.onError(t); - } + @Override + public Publisher expireAt(long timestamp) { + return reactive(mapCache.expireAtAsync(timestamp)); + } - }); + @Override + public Publisher clearExpire() { + return reactive(mapCache.clearExpireAsync()); } @Override - public Publisher put(K key, V value, long ttl, TimeUnit unit) { - if (ttl < 0) { - throw new IllegalArgumentException("TTL can't be negative"); - } - if (ttl == 0) { - return put(key, value); - } + public Publisher putAll(Map map) { + return reactive(mapCache.putAllAsync(map)); + } - if (unit == null) { - throw new NullPointerException("TimeUnit param can't be null"); - } + @Override + public Publisher addAndGet(K key, Number delta) { + return reactive(mapCache.addAndGetAsync(key, delta)); + } - long timeoutDate = System.currentTimeMillis() + unit.toMillis(ttl); - return commandExecutor.evalWriteReactive(getName(), codec, EVAL_PUT_TTL, - "local v = redis.call('hget', KEYS[1], ARGV[2]); " - + "redis.call('zadd', KEYS[2], ARGV[1], ARGV[2]); " - + "redis.call('hset', 
KEYS[1], ARGV[2], ARGV[3]); " - + "return v", - Arrays.asList(getName(), getTimeoutSetName()), timeoutDate, key, value); + @Override + public Publisher fastPut(K key, V value) { + return reactive(mapCache.fastPutAsync(key, value)); } - String getTimeoutSetName() { - return "redisson__timeout__set__{" + getName() + "}"; + @Override + public Publisher put(K key, V value) { + return reactive(mapCache.putAsync(key, value)); } @Override - public Publisher remove(K key) { - return commandExecutor.evalWriteReactive(getName(), codec, EVAL_REMOVE, - "local v = redis.call('hget', KEYS[1], ARGV[1]); " - + "redis.call('zrem', KEYS[2], ARGV[1]); " - + "redis.call('hdel', KEYS[1], ARGV[1]); " - + "return v", - Arrays.asList(getName(), getTimeoutSetName()), key); + public Publisher replace(K key, V value) { + return reactive(mapCache.replaceAsync(key, value)); } @Override - public Publisher fastRemove(K ... keys) { - if (keys == null || keys.length == 0) { - return newSucceeded(0L); - } + public Publisher replace(K key, V oldValue, V newValue) { + return reactive(mapCache.replaceAsync(key, oldValue, newValue)); + } - return commandExecutor.evalWriteReactive(getName(), codec, EVAL_FAST_REMOVE, - "local r = 0;" - + "for i=1, #ARGV,5000 do " - + "r += redis.call('hdel', KEYS[1], unpack(ARGV, i, math.min(i+4999, #ARGV))); " - + "redis.call('zrem', KEYS[2], unpack(ARGV, i, math.min(i+4999, #ARGV))); " - + "end " - + "return r;", - Arrays.asList(getName(), getTimeoutSetName()), keys); + @Override + public Publisher putIfAbsent(K key, V value) { + return reactive(mapCache.putIfAbsentAsync(key, value)); } @Override - Publisher> scanIteratorReactive(InetSocketAddress client, long startPos) { - return commandExecutor.evalReadReactive(client, getName(), new ScanCodec(codec), EVAL_HSCAN, - "local result = {}; " - + "local res = redis.call('hscan', KEYS[1], ARGV[1]); " - + "for i, value in ipairs(res[2]) do " - + "if i % 2 == 0 then " - + "local key = res[2][i-1]; " - + "local expireDate = redis.call('zscore', KEYS[2], key); " - + "if (expireDate == false) or (expireDate ~= false and tonumber(expireDate) > tonumber(ARGV[2])) then " - + "table.insert(result, key); " - + "table.insert(result, value); " - + "end; " - + "end; " - + "end;" - + "return {res[1], result};", Arrays.asList(getName(), getTimeoutSetName()), startPos, System.currentTimeMillis()); + public Publisher> entryIterator() { + return new RedissonMapReactiveIterator>(this).stream(); } @Override - public Publisher delete() { - return commandExecutor.writeReactive(getName(), RedisCommands.DEL_OBJECTS, getName(), getTimeoutSetName()); + public Publisher valueIterator() { + return new RedissonMapReactiveIterator(this) { + @Override + V getValue(Entry entry) { + return (V) entry.getValue().getObj(); + } + }.stream(); } @Override - public Publisher expire(long timeToLive, TimeUnit timeUnit) { - return commandExecutor.evalWriteReactive(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, - "redis.call('zadd', KEYS[2], 92233720368547758, 'redisson__expiretag');" + - "redis.call('pexpire', KEYS[2], ARGV[1]); " + - "return redis.call('pexpire', KEYS[1], ARGV[1]); ", - Arrays.asList(getName(), getTimeoutSetName()), timeUnit.toMillis(timeToLive)); + public Publisher keyIterator() { + return new RedissonMapReactiveIterator(this) { + @Override + K getValue(Entry entry) { + return (K) entry.getKey().getObj(); + } + }.stream(); } @Override - public Publisher expireAt(long timestamp) { - return commandExecutor.evalWriteReactive(getName(), LongCodec.INSTANCE, 
RedisCommands.EVAL_BOOLEAN, - "redis.call('zadd', KEYS[2], 92233720368547758, 'redisson__expiretag');" + - "redis.call('pexpireat', KEYS[2], ARGV[1]); " + - "return redis.call('pexpireat', KEYS[1], ARGV[1]); ", - Arrays.asList(getName(), getTimeoutSetName()), timestamp); + public Publisher size() { + return reactive(mapCache.sizeAsync()); } @Override - public Publisher clearExpire() { - return commandExecutor.evalWriteReactive(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN, - "redis.call('zrem', KEYS[2], 'redisson__expiretag'); " + - "redis.call('persist', KEYS[2]); " + - "return redis.call('persist', KEYS[1]); ", - Arrays.asList(getName(), getTimeoutSetName())); + public boolean equals(Object o) { + if (o == this) + return true; + + if (o instanceof Map) { + final Map m = (Map) o; + if (m.size() != Streams.create(size()).next().poll()) { + return false; + } + + return Streams.create(entryIterator()).map(mapFunction(m)).reduce(true, booleanAnd()).next().poll(); + } else if (o instanceof RMapReactive) { + final RMapReactive m = (RMapReactive) o; + if (Streams.create(m.size()).next().poll() != Streams.create(size()).next().poll()) { + return false; + } + + return Streams.create(entryIterator()).map(mapFunction(m)).reduce(true, booleanAnd()).next().poll(); + } + + return true; + } + + private BiFunction booleanAnd() { + return new BiFunction() { + + @Override + public Boolean apply(Boolean t, Boolean u) { + return t & u; + } + }; + } + + private Function, Boolean> mapFunction(final Map m) { + return new Function, Boolean>() { + @Override + public Boolean apply(Entry e) { + K key = e.getKey(); + V value = e.getValue(); + if (value == null) { + if (!(m.get(key)==null && m.containsKey(key))) + return false; + } else { + if (!value.equals(m.get(key))) + return false; + } + return true; + } + }; + } + + private Function, Boolean> mapFunction(final RMapReactive m) { + return new Function, Boolean>() { + @Override + public Boolean apply(Entry e) { + Object key = e.getKey(); + Object value = e.getValue(); + if (value == null) { + if (!(Streams.create(m.get(key)).next().poll() ==null && Streams.create(m.containsKey(key)).next().poll())) + return false; + } else { + if (!value.equals(Streams.create(m.get(key)).next().poll())) + return false; + } + return true; + } + }; + } + + @Override + public int hashCode() { + return Streams.create(entryIterator()).map(new Function, Integer>() { + @Override + public Integer apply(Entry t) { + return t.hashCode(); + } + }).reduce(0, new BiFunction() { + + @Override + public Integer apply(Integer t, Integer u) { + return t + u; + } + }).next().poll(); } } diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonMapReactive.java b/redisson/src/main/java/org/redisson/reactive/RedissonMapReactive.java index 918e330dc..96af7d427 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonMapReactive.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonMapReactive.java @@ -24,7 +24,7 @@ import org.reactivestreams.Publisher; import org.redisson.RedissonMap; import org.redisson.api.RMapReactive; import org.redisson.client.codec.Codec; -import org.redisson.client.codec.ScanCodec; +import org.redisson.client.codec.MapScanCodec; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.decoder.MapScanResult; import org.redisson.client.protocol.decoder.ScanObjectEntry; @@ -44,18 +44,18 @@ import reactor.rx.Streams; * @param key * @param value */ -public class RedissonMapReactive extends RedissonExpirableReactive 
implements RMapReactive { +public class RedissonMapReactive extends RedissonExpirableReactive implements RMapReactive, MapReactive { private final RedissonMap instance; public RedissonMapReactive(CommandReactiveExecutor commandExecutor, String name) { super(commandExecutor, name); - instance = new RedissonMap(codec, commandExecutor, name); + instance = new RedissonMap(null, codec, commandExecutor, name); } public RedissonMapReactive(Codec codec, CommandReactiveExecutor commandExecutor, String name) { super(codec, commandExecutor, name); - instance = new RedissonMap(codec, commandExecutor, name); + instance = new RedissonMap(null, codec, commandExecutor, name); } @Override @@ -129,8 +129,8 @@ public class RedissonMapReactive extends RedissonExpirableReactive impleme return reactive(instance.fastRemoveAsync(keys)); } - Publisher> scanIteratorReactive(InetSocketAddress client, long startPos) { - return commandExecutor.readReactive(client, getName(), new ScanCodec(codec), RedisCommands.HSCAN, getName(), startPos); + public Publisher> scanIteratorReactive(InetSocketAddress client, long startPos) { + return commandExecutor.readReactive(client, getName(), new MapScanCodec(codec), RedisCommands.HSCAN, getName(), startPos); } @Override diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonMapReactiveIterator.java b/redisson/src/main/java/org/redisson/reactive/RedissonMapReactiveIterator.java index 3cb7677ba..36c682513 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonMapReactiveIterator.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonMapReactiveIterator.java @@ -33,9 +33,9 @@ import reactor.rx.subscription.ReactiveSubscription; public class RedissonMapReactiveIterator { - private final RedissonMapReactive map; + private final MapReactive map; - public RedissonMapReactiveIterator(RedissonMapReactive map) { + public RedissonMapReactiveIterator(MapReactive map) { this.map = map; } diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonScoredSortedSetReactive.java b/redisson/src/main/java/org/redisson/reactive/RedissonScoredSortedSetReactive.java index 927b4056d..7f87a584d 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonScoredSortedSetReactive.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonScoredSortedSetReactive.java @@ -23,6 +23,7 @@ import java.util.Collections; import org.reactivestreams.Publisher; import org.redisson.api.RScoredSortedSetReactive; import org.redisson.client.codec.Codec; +import org.redisson.client.codec.ScanCodec; import org.redisson.client.codec.StringCodec; import org.redisson.client.protocol.RedisCommand; import org.redisson.client.protocol.RedisCommand.ValueType; @@ -30,6 +31,7 @@ import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.ScoredEntry; import org.redisson.client.protocol.convertor.BooleanReplayConvertor; import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandReactiveExecutor; public class RedissonScoredSortedSetReactive extends RedissonExpirableReactive implements RScoredSortedSetReactive { @@ -122,15 +124,15 @@ public class RedissonScoredSortedSetReactive extends RedissonExpirableReactiv return commandExecutor.readReactive(getName(), codec, RedisCommands.ZRANK, getName(), o); } - private Publisher> scanIteratorReactive(InetSocketAddress client, long startPos) { - return commandExecutor.readReactive(client, getName(), codec, RedisCommands.ZSCAN, 
getName(), startPos); + private Publisher> scanIteratorReactive(InetSocketAddress client, long startPos) { + return commandExecutor.readReactive(client, getName(), new ScanCodec(codec), RedisCommands.ZSCAN, getName(), startPos); } @Override public Publisher iterator() { return new SetReactiveIterator() { @Override - protected Publisher> scanIteratorReactive(InetSocketAddress client, long nextIterPos) { + protected Publisher> scanIteratorReactive(InetSocketAddress client, long nextIterPos) { return RedissonScoredSortedSetReactive.this.scanIteratorReactive(client, nextIterPos); } }; diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonSetCacheReactive.java b/redisson/src/main/java/org/redisson/reactive/RedissonSetCacheReactive.java index cac153850..dcd390e1d 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonSetCacheReactive.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonSetCacheReactive.java @@ -24,13 +24,14 @@ import java.util.List; import java.util.concurrent.TimeUnit; import org.reactivestreams.Publisher; -import org.redisson.EvictionScheduler; import org.redisson.RedissonSetCache; import org.redisson.api.RSetCacheReactive; import org.redisson.client.codec.Codec; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandReactiveExecutor; +import org.redisson.eviction.EvictionScheduler; /** *

Set-based cache with ability to set TTL for each entry via @@ -43,7 +44,7 @@ import org.redisson.command.CommandReactiveExecutor; * Thus values are checked for TTL expiration during any value read operation. * If an entry has expired then it isn't returned and a clean task runs asynchronously. * The clean task removes 100 expired entries at once. - * In addition there is {@link org.redisson.EvictionScheduler}. This scheduler + * In addition there is {@link org.redisson.eviction.EvictionScheduler}. This scheduler * deletes expired entries at an interval between 5 seconds and 2 hours.

* *

If eviction is not required then it's better to use {@link org.redisson.api.RSet}.
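
As an illustration of the per-entry TTL behaviour described in the javadoc above, here is a minimal usage sketch for the reactive set cache. It assumes an already configured `RedissonReactiveClient` (referred to as `redisson` below); the cache name `myCache` and the stored value are illustrative only.

```java
import java.util.concurrent.TimeUnit;

import org.reactivestreams.Publisher;
import org.redisson.api.RSetCacheReactive;
import org.redisson.api.RedissonReactiveClient;

public class SetCacheTtlSketch {

    // Minimal sketch: "redisson" and the cache name "myCache" are assumptions.
    public static void addWithTtl(RedissonReactiveClient redisson) {
        RSetCacheReactive<String> cache = redisson.getSetCache("myCache");

        // The value expires 10 seconds after insertion. Expired values are not
        // returned by reads and are later removed by the eviction scheduler.
        Publisher<Boolean> added = cache.add("value1", 10, TimeUnit.SECONDS);
    }
}
```
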

@@ -76,15 +77,15 @@ public class RedissonSetCacheReactive extends RedissonExpirableReactive imple return reactive(instance.containsAsync(o)); } - Publisher> scanIterator(InetSocketAddress client, long startPos) { - return reactive(instance.scanIteratorAsync(client, startPos)); + Publisher> scanIterator(InetSocketAddress client, long startPos) { + return reactive(instance.scanIteratorAsync(getName(), client, startPos)); } @Override public Publisher iterator() { return new SetReactiveIterator() { @Override - protected Publisher> scanIteratorReactive(InetSocketAddress client, long nextIterPos) { + protected Publisher> scanIteratorReactive(InetSocketAddress client, long nextIterPos) { return RedissonSetCacheReactive.this.scanIterator(client, nextIterPos); } }; diff --git a/redisson/src/main/java/org/redisson/reactive/RedissonSetReactive.java b/redisson/src/main/java/org/redisson/reactive/RedissonSetReactive.java index 0a9ea284d..fa5db8e9f 100644 --- a/redisson/src/main/java/org/redisson/reactive/RedissonSetReactive.java +++ b/redisson/src/main/java/org/redisson/reactive/RedissonSetReactive.java @@ -26,8 +26,10 @@ import org.reactivestreams.Publisher; import org.redisson.RedissonSet; import org.redisson.api.RSetReactive; import org.redisson.client.codec.Codec; +import org.redisson.client.codec.ScanCodec; import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; import org.redisson.command.CommandReactiveExecutor; /** @@ -66,8 +68,8 @@ public class RedissonSetReactive extends RedissonExpirableReactive implements return reactive(instance.containsAsync(o)); } - private Publisher> scanIteratorReactive(InetSocketAddress client, long startPos) { - return commandExecutor.readReactive(client, getName(), codec, RedisCommands.SSCAN, getName(), startPos); + private Publisher> scanIteratorReactive(InetSocketAddress client, long startPos) { + return commandExecutor.readReactive(client, getName(), new ScanCodec(codec), RedisCommands.SSCAN, getName(), startPos); } @Override @@ -156,7 +158,7 @@ public class RedissonSetReactive extends RedissonExpirableReactive implements public Publisher iterator() { return new SetReactiveIterator() { @Override - protected Publisher> scanIteratorReactive(InetSocketAddress client, long nextIterPos) { + protected Publisher> scanIteratorReactive(InetSocketAddress client, long nextIterPos) { return RedissonSetReactive.this.scanIteratorReactive(client, nextIterPos); } }; diff --git a/redisson/src/main/java/org/redisson/reactive/SetReactiveIterator.java b/redisson/src/main/java/org/redisson/reactive/SetReactiveIterator.java index 787866822..95adcd488 100644 --- a/redisson/src/main/java/org/redisson/reactive/SetReactiveIterator.java +++ b/redisson/src/main/java/org/redisson/reactive/SetReactiveIterator.java @@ -16,13 +16,16 @@ package org.redisson.reactive; import java.net.InetSocketAddress; +import java.util.ArrayList; import java.util.List; import org.reactivestreams.Publisher; import org.reactivestreams.Subscriber; import org.reactivestreams.Subscription; import org.redisson.client.protocol.decoder.ListScanResult; +import org.redisson.client.protocol.decoder.ScanObjectEntry; +import io.netty.buffer.ByteBuf; import reactor.rx.Stream; import reactor.rx.subscription.ReactiveSubscription; @@ -32,28 +35,27 @@ public abstract class SetReactiveIterator extends Stream { public void subscribe(final Subscriber t) { t.onSubscribe(new ReactiveSubscription(this, t) { - private 
List firstValues; + private List firstValues; + private List lastValues; private long nextIterPos; private InetSocketAddress client; - private long currentIndex; + private boolean finished; @Override protected void onRequest(long n) { - currentIndex = n; - nextValues(); } - private void handle(List vals) { - for (V val : vals) { - onNext(val); + private void handle(List vals) { + for (ScanObjectEntry val : vals) { + onNext((V)val.getObj()); } } protected void nextValues() { final ReactiveSubscription m = this; - scanIteratorReactive(client, nextIterPos).subscribe(new Subscriber>() { + scanIteratorReactive(client, nextIterPos).subscribe(new Subscriber>() { @Override public void onSubscribe(Subscription s) { @@ -61,32 +63,68 @@ public abstract class SetReactiveIterator extends Stream { } @Override - public void onNext(ListScanResult res) { - client = res.getRedisClient(); - - long prevIterPos = nextIterPos; - if (nextIterPos == 0 && firstValues == null) { - firstValues = res.getValues(); - } else if (res.getValues().equals(firstValues)) { - m.onComplete(); - currentIndex = 0; + public void onNext(ListScanResult res) { + if (finished) { + free(firstValues); + free(lastValues); + + client = null; + firstValues = null; + lastValues = null; + nextIterPos = 0; return; } - nextIterPos = res.getPos(); - if (prevIterPos == nextIterPos) { - nextIterPos = -1; + long prevIterPos = nextIterPos; + if (lastValues != null) { + free(lastValues); + } + + lastValues = convert(res.getValues()); + client = res.getRedisClient(); + + if (nextIterPos == 0 && firstValues == null) { + firstValues = lastValues; + lastValues = null; + if (firstValues.isEmpty()) { + client = null; + firstValues = null; + nextIterPos = 0; + prevIterPos = -1; + } + } else { + if (firstValues.isEmpty()) { + firstValues = lastValues; + lastValues = null; + if (firstValues.isEmpty()) { + if (res.getPos() == 0) { + finished = true; + m.onComplete(); + return; + } + } + } else if (lastValues.removeAll(firstValues)) { + free(firstValues); + free(lastValues); + + client = null; + firstValues = null; + lastValues = null; + nextIterPos = 0; + prevIterPos = -1; + finished = true; + m.onComplete(); + return; + } } handle(res.getValues()); - if (currentIndex == 0) { - return; - } - - if (nextIterPos == -1) { + nextIterPos = res.getPos(); + + if (prevIterPos == nextIterPos) { + finished = true; m.onComplete(); - currentIndex = 0; } } @@ -97,7 +135,7 @@ public abstract class SetReactiveIterator extends Stream { @Override public void onComplete() { - if (currentIndex == 0) { + if (finished) { return; } nextValues(); @@ -106,7 +144,24 @@ public abstract class SetReactiveIterator extends Stream { } }); } + + private void free(List list) { + if (list == null) { + return; + } + for (ByteBuf byteBuf : list) { + byteBuf.release(); + } + } + + private List convert(List list) { + List result = new ArrayList(list.size()); + for (ScanObjectEntry entry : list) { + result.add(entry.getBuf()); + } + return result; + } - protected abstract Publisher> scanIteratorReactive(InetSocketAddress client, long nextIterPos); + protected abstract Publisher> scanIteratorReactive(InetSocketAddress client, long nextIterPos); } diff --git a/redisson/src/main/java/org/redisson/remote/RemoteServiceKey.java b/redisson/src/main/java/org/redisson/remote/RemoteServiceKey.java index 9b1c4b83f..ffe845db2 100644 --- a/redisson/src/main/java/org/redisson/remote/RemoteServiceKey.java +++ b/redisson/src/main/java/org/redisson/remote/RemoteServiceKey.java @@ -15,6 +15,11 @@ */ package 
org.redisson.remote; +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + /** * * @author Nikita Koksharov @@ -24,17 +29,23 @@ public class RemoteServiceKey { private final Class serviceInterface; private final String methodName; - - public RemoteServiceKey(Class serviceInterface, String methodName) { + private final List signatures; + + public RemoteServiceKey(Class serviceInterface, String method, List signatures) { super(); this.serviceInterface = serviceInterface; - this.methodName = methodName; + this.methodName = method; + this.signatures = Collections.unmodifiableList(signatures); } public String getMethodName() { return methodName; } - + + public List getSignatures() { + return signatures; + } + public Class getServiceInterface() { return serviceInterface; } @@ -44,6 +55,7 @@ public class RemoteServiceKey { final int prime = 31; int result = 1; result = prime * result + ((methodName == null) ? 0 : methodName.hashCode()); + result = prime * result + ((signatures == null) ? 0 : signatures.hashCode()); result = prime * result + ((serviceInterface == null) ? 0 : serviceInterface.getName().hashCode()); return result; } @@ -60,9 +72,11 @@ public class RemoteServiceKey { if (methodName == null) { if (other.methodName != null) return false; - } else if (!methodName.equals(other.methodName)) + } else if (!methodName.equals(other.methodName)) { + return false; + } else if (!signatures.equals(other.signatures)) { return false; - if (serviceInterface == null) { + } if (serviceInterface == null) { if (other.serviceInterface != null) return false; } else if (!serviceInterface.equals(other.serviceInterface)) diff --git a/redisson/src/main/java/org/redisson/remote/RemoteServiceRequest.java b/redisson/src/main/java/org/redisson/remote/RemoteServiceRequest.java index 2ddfbb387..2c70d6223 100644 --- a/redisson/src/main/java/org/redisson/remote/RemoteServiceRequest.java +++ b/redisson/src/main/java/org/redisson/remote/RemoteServiceRequest.java @@ -17,6 +17,7 @@ package org.redisson.remote; import java.io.Serializable; import java.util.Arrays; +import java.util.List; import org.redisson.api.RemoteInvocationOptions; @@ -31,6 +32,7 @@ public class RemoteServiceRequest implements Serializable { private String requestId; private String methodName; + private List signatures; private Object[] args; private RemoteInvocationOptions options; private long date; @@ -43,10 +45,11 @@ public class RemoteServiceRequest implements Serializable { this.requestId = requestId; } - public RemoteServiceRequest(String requestId, String methodName, Object[] args, RemoteInvocationOptions options, long date) { + public RemoteServiceRequest(String requestId, String methodName, List signatures, Object[] args, RemoteInvocationOptions options, long date) { super(); this.requestId = requestId; this.methodName = methodName; + this.signatures = signatures; this.args = args; this.options = options; this.date = date; @@ -64,6 +67,10 @@ public class RemoteServiceRequest implements Serializable { return args; } + public List getSignatures() { + return signatures; + } + public RemoteInvocationOptions getOptions() { return options; } @@ -74,7 +81,8 @@ public class RemoteServiceRequest implements Serializable { @Override public String toString() { - return "RemoteServiceRequest [requestId=" + requestId + ", methodName=" + methodName + ", args=" + return "RemoteServiceRequest [requestId=" + requestId + ", methodName=" + methodName + ", signatures=[" + + 
Arrays.toString(signatures.toArray()) + "], args=" + Arrays.toString(args) + ", options=" + options + ", date=" + date + "]"; } diff --git a/redisson/src/main/java/org/redisson/spring/cache/RedissonCache.java b/redisson/src/main/java/org/redisson/spring/cache/RedissonCache.java index 40025dd99..fc44f108d 100644 --- a/redisson/src/main/java/org/redisson/spring/cache/RedissonCache.java +++ b/redisson/src/main/java/org/redisson/spring/cache/RedissonCache.java @@ -15,7 +15,6 @@ */ package org.redisson.spring.cache; -import java.io.IOException; import java.lang.reflect.Constructor; import java.util.concurrent.Callable; import java.util.concurrent.TimeUnit; @@ -24,7 +23,6 @@ import org.redisson.api.RLock; import org.redisson.api.RMap; import org.redisson.api.RMapCache; import org.redisson.api.RedissonClient; -import org.redisson.misc.Hash; import org.springframework.cache.Cache; import org.springframework.cache.support.SimpleValueWrapper; @@ -126,8 +124,7 @@ public class RedissonCache implements Cache { public T get(Object key, Callable valueLoader) { Object value = map.get(key); if (value == null) { - String lockName = getLockName(key); - RLock lock = redisson.getLock(lockName); + RLock lock = map.getLock(key); lock.lock(); try { value = map.get(key); @@ -154,15 +151,6 @@ public class RedissonCache implements Cache { return (T) fromStoreValue(value); } - private String getLockName(Object key) { - try { - byte[] keyState = redisson.getConfig().getCodec().getMapKeyEncoder().encode(key); - return "{" + map.getName() + "}:" + Hash.hashToBase64(keyState) + ":key"; - } catch (IOException e) { - throw new IllegalStateException(e); - } - } - protected Object fromStoreValue(Object storeValue) { if (storeValue == NullValue.INSTANCE) { return null; diff --git a/redisson/src/main/java/org/redisson/spring/cache/RedissonSpringCacheManager.java b/redisson/src/main/java/org/redisson/spring/cache/RedissonSpringCacheManager.java index 99fd12958..1cfea6077 100644 --- a/redisson/src/main/java/org/redisson/spring/cache/RedissonSpringCacheManager.java +++ b/redisson/src/main/java/org/redisson/spring/cache/RedissonSpringCacheManager.java @@ -52,7 +52,13 @@ public class RedissonSpringCacheManager implements CacheManager, ResourceLoaderA private String configLocation; - public RedissonSpringCacheManager() { + /** + * Creates CacheManager supplied by Redisson instance + * + * @param redisson object + */ + public RedissonSpringCacheManager(RedissonClient redisson) { + this(redisson, (String)null, null); } /** diff --git a/redisson/src/main/java/org/redisson/spring/session/RedissonSessionRepository.java b/redisson/src/main/java/org/redisson/spring/session/RedissonSessionRepository.java new file mode 100644 index 000000000..372e78bd5 --- /dev/null +++ b/redisson/src/main/java/org/redisson/spring/session/RedissonSessionRepository.java @@ -0,0 +1,375 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.redisson.spring.session; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; +import java.util.concurrent.TimeUnit; + +import org.redisson.api.RMap; +import org.redisson.api.RPatternTopic; +import org.redisson.api.RSet; +import org.redisson.api.RTopic; +import org.redisson.api.RedissonClient; +import org.redisson.api.listener.PatternMessageListener; +import org.redisson.client.codec.StringCodec; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import org.springframework.context.ApplicationEventPublisher; +import org.springframework.expression.Expression; +import org.springframework.expression.spel.standard.SpelExpressionParser; +import org.springframework.session.ExpiringSession; +import org.springframework.session.FindByIndexNameSessionRepository; +import org.springframework.session.MapSession; +import org.springframework.session.Session; +import org.springframework.session.events.SessionCreatedEvent; +import org.springframework.session.events.SessionDeletedEvent; +import org.springframework.session.events.SessionExpiredEvent; + +/** + * + * @author Nikita Koksharov + * + */ +public class RedissonSessionRepository implements FindByIndexNameSessionRepository, + PatternMessageListener { + + final class RedissonSession implements ExpiringSession { + + private String principalName; + private final MapSession delegate; + private RMap map; + + public RedissonSession() { + this.delegate = new MapSession(); + map = redisson.getMap("redisson_spring_session:" + delegate.getId()); + principalName = resolvePrincipal(delegate); + + Map newMap = new HashMap(3); + newMap.put("session:creationTime", delegate.getCreationTime()); + newMap.put("session:lastAccessedTime", delegate.getLastAccessedTime()); + newMap.put("session:maxInactiveInterval", delegate.getMaxInactiveIntervalInSeconds()); + map.putAll(newMap); + + updateExpiration(); + + String channelName = getEventsChannelName(delegate.getId()); + RTopic topic = redisson.getTopic(channelName, StringCodec.INSTANCE); + topic.publish(delegate.getId()); + } + + private void updateExpiration() { + if (delegate.getMaxInactiveIntervalInSeconds() >= 0) { + map.expire(delegate.getMaxInactiveIntervalInSeconds(), TimeUnit.SECONDS); + } + } + + public RedissonSession(String sessionId) { + this.delegate = new MapSession(sessionId); + map = redisson.getMap("redisson_spring_session:" + sessionId); + principalName = resolvePrincipal(delegate); + } + + public void delete() { + map.delete(); + } + + public boolean load() { + Set> entrySet = map.readAllEntrySet(); + for (Entry entry : entrySet) { + if ("session:creationTime".equals(entry.getKey())) { + delegate.setCreationTime((Long) entry.getValue()); + } else if ("session:lastAccessedTime".equals(entry.getKey())) { + delegate.setLastAccessedTime((Long) entry.getValue()); + } else if ("session:maxInactiveInterval".equals(entry.getKey())) { + delegate.setMaxInactiveIntervalInSeconds((Integer) entry.getValue()); + } else { + delegate.setAttribute(entry.getKey(), entry.getValue()); + } + } + return !entrySet.isEmpty(); + } + + @Override + public String getId() { + return delegate.getId(); + } + + @Override + public T getAttribute(String attributeName) { + return delegate.getAttribute(attributeName); + } + + @Override + public Set getAttributeNames() { + return delegate.getAttributeNames(); + } + + @Override + public void setAttribute(String attributeName, Object attributeValue) { + if (attributeValue == null) { + 
removeAttribute(attributeName); + return; + } + + delegate.setAttribute(attributeName, attributeValue); + + if (map != null) { + map.fastPut(attributeName, attributeValue); + + String principalSessionAttr = getSessionAttrNameKey(PRINCIPAL_NAME_INDEX_NAME); + String securityPrincipalSessionAttr = getSessionAttrNameKey(SPRING_SECURITY_CONTEXT); + + if (attributeName.equals(principalSessionAttr) + || attributeName.equals(securityPrincipalSessionAttr)) { + // remove old + if (principalName != null) { + RSet set = getPrincipalSet(principalName); + set.remove(getId()); + } + + principalName = resolvePrincipal(this); + if (principalName != null) { + RSet set = getPrincipalSet(principalName); + set.add(getId()); + } + } + } + } + + public void clearPrincipal() { + principalName = resolvePrincipal(this); + if (principalName != null) { + RSet set = getPrincipalSet(principalName); + set.remove(getId()); + } + } + + @Override + public void removeAttribute(String attributeName) { + delegate.removeAttribute(attributeName); + + if (map != null) { + map.fastRemove(attributeName); + } + } + + @Override + public long getCreationTime() { + return delegate.getCreationTime(); + } + + @Override + public void setLastAccessedTime(long lastAccessedTime) { + delegate.setLastAccessedTime(lastAccessedTime); + + if (map != null) { + map.fastPut("session:lastAccessedTime", lastAccessedTime); + updateExpiration(); + } + } + + @Override + public long getLastAccessedTime() { + return delegate.getLastAccessedTime(); + } + + @Override + public void setMaxInactiveIntervalInSeconds(int interval) { + delegate.setMaxInactiveIntervalInSeconds(interval); + + if (map != null) { + map.fastPut("session:maxInactiveInterval", interval); + updateExpiration(); + } + } + + @Override + public int getMaxInactiveIntervalInSeconds() { + return delegate.getMaxInactiveIntervalInSeconds(); + } + + @Override + public boolean isExpired() { + return delegate.isExpired(); + } + + } + + private static final Logger log = LoggerFactory.getLogger(RedissonSessionRepository.class); + + private static final String SPRING_SECURITY_CONTEXT = "SPRING_SECURITY_CONTEXT"; + + private static final SpelExpressionParser SPEL_PARSER = new SpelExpressionParser(); + + private RedissonClient redisson; + private ApplicationEventPublisher eventPublisher; + private RPatternTopic deletedTopic; + private RPatternTopic expiredTopic; + private RPatternTopic createdTopic; + + private String keyPrefix = "spring:session:"; + private Integer defaultMaxInactiveInterval; + + public RedissonSessionRepository(RedissonClient redissonClient, ApplicationEventPublisher eventPublisher) { + this.redisson = redissonClient; + this.eventPublisher = eventPublisher; + + deletedTopic = redisson.getPatternTopic("__keyevent@*:del", StringCodec.INSTANCE); + deletedTopic.addListener(this); + expiredTopic = redisson.getPatternTopic("__keyevent@*:expired", StringCodec.INSTANCE); + expiredTopic.addListener(this); + createdTopic = redisson.getPatternTopic(getEventsChannelPrefix() + "*", StringCodec.INSTANCE); + createdTopic.addListener(this); + } + + @Override + public void onMessage(String pattern, String channel, String body) { + if (createdTopic.getPatternNames().contains(pattern)) { + RedissonSession session = getSession(body); + if (session != null) { + publishEvent(new SessionCreatedEvent(this, session)); + } + } else if (deletedTopic.getPatternNames().contains(pattern)) { + String id = body.split(":")[1]; + RedissonSession session = new RedissonSession(id); + if (session.load()) { + 
session.clearPrincipal(); + publishEvent(new SessionDeletedEvent(this, session)); + } else { + publishEvent(new SessionDeletedEvent(this, id)); + } + } else if (expiredTopic.getPatternNames().contains(pattern)) { + String id = body.split(":")[1]; + RedissonSession session = new RedissonSession(id); + if (session.load()) { + session.clearPrincipal(); + publishEvent(new SessionExpiredEvent(this, session)); + } else { + publishEvent(new SessionExpiredEvent(this, id)); + } + } + } + + private void publishEvent(Object event) { + try { + eventPublisher.publishEvent(event); + } catch (Exception e) { + log.error(e.getMessage(), e); + } + } + + public void setDefaultMaxInactiveInterval(int defaultMaxInactiveInterval) { + this.defaultMaxInactiveInterval = defaultMaxInactiveInterval; + } + + @Override + public RedissonSession createSession() { + RedissonSession session = new RedissonSession(); + if (defaultMaxInactiveInterval != null) { + session.setMaxInactiveIntervalInSeconds(defaultMaxInactiveInterval); + } + return session; + } + + @Override + public void save(RedissonSession session) { + // session changes are stored in real-time + } + + @Override + public RedissonSession getSession(String id) { + RedissonSession session = new RedissonSession(id); + if (!session.load() || session.isExpired()) { + return null; + } + return session; + } + + @Override + public void delete(String id) { + RedissonSession session = getSession(id); + if (session == null) { + return; + } + + session.clearPrincipal(); + session.delete(); + } + + public void setKeyPrefix(String keyPrefix) { + this.keyPrefix = keyPrefix; + } + + String resolvePrincipal(Session session) { + String principalName = session.getAttribute(PRINCIPAL_NAME_INDEX_NAME); + if (principalName != null) { + return principalName; + } + + Object auth = session.getAttribute(SPRING_SECURITY_CONTEXT); + if (auth == null) { + return null; + } + + Expression expression = SPEL_PARSER.parseExpression("authentication?.name"); + return expression.getValue(auth, String.class); + } + + String getEventsChannelName(String sessionId) { + return getEventsChannelPrefix() + sessionId; + } + + String getEventsChannelPrefix() { + return keyPrefix + "created:event:"; + } + + String getPrincipalKey(String principalName) { + return keyPrefix + "index:" + FindByIndexNameSessionRepository.PRINCIPAL_NAME_INDEX_NAME + ":" + principalName; + } + + String getSessionAttrNameKey(String name) { + return "session-attr:" + name; + } + + @Override + public Map findByIndexNameAndIndexValue(String indexName, String indexValue) { + if (!PRINCIPAL_NAME_INDEX_NAME.equals(indexName)) { + return Collections.emptyMap(); + } + + RSet set = getPrincipalSet(indexValue); + + Set sessionIds = set.readAll(); + Map result = new HashMap(); + for (String id : sessionIds) { + RedissonSession session = getSession(id); + if (session != null) { + result.put(id, session); + } + } + return result; + } + + private RSet getPrincipalSet(String indexValue) { + String principalKey = getPrincipalKey(indexValue); + return redisson.getSet(principalKey); + } + +} diff --git a/redisson/src/main/java/org/redisson/spring/session/config/EnableRedissonHttpSession.java b/redisson/src/main/java/org/redisson/spring/session/config/EnableRedissonHttpSession.java new file mode 100644 index 000000000..e41594d22 --- /dev/null +++ b/redisson/src/main/java/org/redisson/spring/session/config/EnableRedissonHttpSession.java @@ -0,0 +1,61 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 
(the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.spring.session.config; + +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; + +import org.springframework.context.annotation.Configuration; +import org.springframework.context.annotation.Import; +import org.springframework.session.MapSession; + +/** + * Enables Redisson's Spring Session implementation backed by Redis and + * exposes SessionRepositoryFilter as a bean named "springSessionRepositoryFilter". + *

+ * Redisson instance should be registered as bean in application context. + * Usage example: + *

+ * 
+ * {@literal @Configuration}
+ * {@literal @EnableRedissonHttpSession}
+ * public class RedissonHttpSessionConfig {
+ *    
+ *    {@literal @Bean}
+ *    public RedissonClient redisson() {
+ *        return Redisson.create();
+ *    }
+ *    
+ * }
+ * 
+ * 
+ * + * @author Nikita Koksharov + * + */ +@Retention(RetentionPolicy.RUNTIME) +@Target(ElementType.TYPE) +@Import(RedissonHttpSessionConfiguration.class) +@Configuration +public @interface EnableRedissonHttpSession { + + int maxInactiveIntervalInSeconds() default MapSession.DEFAULT_MAX_INACTIVE_INTERVAL_SECONDS; + + String keyPrefix() default ""; + +} diff --git a/redisson/src/main/java/org/redisson/spring/session/config/RedissonHttpSessionConfiguration.java b/redisson/src/main/java/org/redisson/spring/session/config/RedissonHttpSessionConfiguration.java new file mode 100644 index 000000000..88c5a8627 --- /dev/null +++ b/redisson/src/main/java/org/redisson/spring/session/config/RedissonHttpSessionConfiguration.java @@ -0,0 +1,74 @@ +/** + * Copyright 2016 Nikita Koksharov + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.redisson.spring.session.config; + +import java.util.Map; + +import org.redisson.api.RedissonClient; +import org.redisson.spring.session.RedissonSessionRepository; +import org.springframework.context.ApplicationEventPublisher; +import org.springframework.context.annotation.Bean; +import org.springframework.context.annotation.Configuration; +import org.springframework.context.annotation.ImportAware; +import org.springframework.core.annotation.AnnotationAttributes; +import org.springframework.core.type.AnnotationMetadata; +import org.springframework.session.config.annotation.web.http.SpringHttpSessionConfiguration; +import org.springframework.util.StringUtils; + +/** + * Exposes the SessionRepositoryFilter as the bean + * named "springSessionRepositoryFilter". + *

+ * Redisson instance should be registered as bean + * in application context. + * + * @author Nikita Koksharov + * + */ +@Configuration +public class RedissonHttpSessionConfiguration extends SpringHttpSessionConfiguration implements ImportAware { + + private Integer maxInactiveIntervalInSeconds; + private String keyPrefix; + + @Bean + public RedissonSessionRepository sessionRepository( + RedissonClient redissonClient, ApplicationEventPublisher eventPublisher) { + RedissonSessionRepository repository = new RedissonSessionRepository(redissonClient, eventPublisher); + if (StringUtils.hasText(keyPrefix)) { + repository.setKeyPrefix(keyPrefix); + } + repository.setDefaultMaxInactiveInterval(maxInactiveIntervalInSeconds); + return repository; + } + + public void setMaxInactiveIntervalInSeconds(Integer maxInactiveIntervalInSeconds) { + this.maxInactiveIntervalInSeconds = maxInactiveIntervalInSeconds; + } + + public void setKeyPrefix(String keyPrefix) { + this.keyPrefix = keyPrefix; + } + + @Override + public void setImportMetadata(AnnotationMetadata importMetadata) { + Map map = importMetadata.getAnnotationAttributes(EnableRedissonHttpSession.class.getName()); + AnnotationAttributes attrs = AnnotationAttributes.fromMap(map); + keyPrefix = attrs.getString("keyPrefix"); + maxInactiveIntervalInSeconds = attrs.getNumber("maxInactiveIntervalInSeconds"); + } + +} diff --git a/redisson/src/main/resources/META-INF/services/javax.cache.spi.CachingProvider b/redisson/src/main/resources/META-INF/services/javax.cache.spi.CachingProvider new file mode 100644 index 000000000..952eff76b --- /dev/null +++ b/redisson/src/main/resources/META-INF/services/javax.cache.spi.CachingProvider @@ -0,0 +1 @@ +org.redisson.jcache.JCachingProvider \ No newline at end of file diff --git a/redisson/src/test/java/org/redisson/BaseTest.java b/redisson/src/test/java/org/redisson/BaseTest.java index f54265cdc..2caa0e612 100644 --- a/redisson/src/test/java/org/redisson/BaseTest.java +++ b/redisson/src/test/java/org/redisson/BaseTest.java @@ -60,9 +60,7 @@ public abstract class BaseTest { // config.useSentinelServers().setMasterName("mymaster").addSentinelAddress("127.0.0.1:26379", "127.0.0.1:26389"); // config.useClusterServers().addNodeAddress("127.0.0.1:7004", "127.0.0.1:7001", "127.0.0.1:7000"); config.useSingleServer() - .setAddress(RedisRunner.getDefaultRedisServerBindAddressAndPort()) - .setConnectTimeout(1000000) - .setTimeout(1000000); + .setAddress(RedisRunner.getDefaultRedisServerBindAddressAndPort()); // .setPassword("mypass1"); // config.useMasterSlaveConnection() // .setMasterAddress("127.0.0.1:6379") diff --git a/redisson/src/test/java/org/redisson/RedisRunner.java b/redisson/src/test/java/org/redisson/RedisRunner.java index 9d0acd62b..c7bfd0dde 100644 --- a/redisson/src/test/java/org/redisson/RedisRunner.java +++ b/redisson/src/test/java/org/redisson/RedisRunner.java @@ -12,11 +12,13 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.LinkedHashMap; import java.util.List; +import java.util.Map; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.stream.Collectors; import org.redisson.client.RedisClient; import org.redisson.client.RedisConnection; +import org.redisson.client.protocol.RedisCommands; import org.redisson.client.protocol.RedisStrictCommand; import org.redisson.client.protocol.convertor.VoidReplayConvertor; @@ -87,7 +89,7 @@ public class RedisRunner { SLOWLOG_LOG_SLOWER_THAN, SLOWLOG_MAX_LEN, LATENCY_MONITOR_THRESHOLD, - NOFITY_KEYSPACE_EVENTS, + 
NOTIFY_KEYSPACE_EVENTS, HASH_MAX_ZIPLIST_ENTRIES, HASH_MAX_ZIPLIST_VALUE, LIST_MAX_ZIPLIST_ENTRIES, @@ -172,7 +174,7 @@ public class RedisRunner { } private final LinkedHashMap options = new LinkedHashMap<>(); - private static RedisRunner.RedisProcess defaultRedisInstance; + protected static RedisRunner.RedisProcess defaultRedisInstance; private static int defaultRedisInstanceExitCode; private String defaultDir = Paths.get("").toString(); @@ -618,12 +620,16 @@ public class RedisRunner { return this; } - public RedisRunner notifyKeyspaceEvents(KEYSPACE_EVENTS_OPTIONS notifyKeyspaceEvents) { - String existing = this.options.getOrDefault(REDIS_OPTIONS.CLUSTER_CONFIG_FILE, ""); - addConfigOption(REDIS_OPTIONS.CLUSTER_CONFIG_FILE, - existing.contains(notifyKeyspaceEvents.toString()) + public RedisRunner notifyKeyspaceEvents(KEYSPACE_EVENTS_OPTIONS... notifyKeyspaceEvents) { + String existing = this.options.getOrDefault(REDIS_OPTIONS.NOTIFY_KEYSPACE_EVENTS, ""); + + String events = Arrays.stream(notifyKeyspaceEvents) + .collect(StringBuilder::new, StringBuilder::append, StringBuilder::append).toString(); + + addConfigOption(REDIS_OPTIONS.NOTIFY_KEYSPACE_EVENTS, + existing.contains(events) ? existing - : (existing + notifyKeyspaceEvents.toString())); + : (existing + events)); return this; } @@ -774,7 +780,10 @@ public class RedisRunner { public RedisVersion getRedisVersion() { if (redisVersion == null) { - redisVersion = new RedisVersion(createRedisClientInstance().serverInfo().get("redis_version")); + RedisConnection c = createRedisClientInstance().connect(); + Map serverMap = c.sync(RedisCommands.INFO_SERVER); + redisVersion = new RedisVersion(serverMap.get("redis_version")); + c.closeAsync(); } return redisVersion; } diff --git a/redisson/src/test/java/org/redisson/RedissonBinaryStreamTest.java b/redisson/src/test/java/org/redisson/RedissonBinaryStreamTest.java new file mode 100644 index 000000000..2d508a85a --- /dev/null +++ b/redisson/src/test/java/org/redisson/RedissonBinaryStreamTest.java @@ -0,0 +1,207 @@ +package org.redisson; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.io.ByteArrayInputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.math.BigInteger; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; +import java.util.Arrays; +import java.util.concurrent.ThreadLocalRandom; + +import org.junit.Test; +import org.redisson.api.RBinaryStream; + +public class RedissonBinaryStreamTest extends BaseTest { + + @Test + public void testEmptyRead() throws IOException { + RBinaryStream stream = redisson.getBinaryStream("test"); + assertThat(stream.getInputStream().read()).isEqualTo(-1); + } + + private void testLimit(int sizeInMBs, int chunkSize) throws IOException, NoSuchAlgorithmException { + RBinaryStream stream = redisson.getBinaryStream("test"); + + MessageDigest hash = MessageDigest.getInstance("SHA-1"); + hash.reset(); + + for (int i = 0; i < sizeInMBs; i++) { + byte[] bytes = new byte[chunkSize]; + ThreadLocalRandom.current().nextBytes(bytes); + hash.update(bytes); + stream.getOutputStream().write(bytes); + } + + String writtenDataHash = new BigInteger(1, hash.digest()).toString(16); + + hash.reset(); + InputStream s = stream.getInputStream(); + long readBytesTotal = 0; + while (true) { + byte[] bytes = new byte[ThreadLocalRandom.current().nextInt(0, chunkSize)]; + int readBytes = s.read(bytes); + if (readBytes == -1) { + break; + } + if (readBytes < bytes.length) { + 
bytes = Arrays.copyOf(bytes, readBytes); + } + hash.update(bytes); + readBytesTotal += readBytes; + } + String readDataHash = new BigInteger(1, hash.digest()).toString(16); + + assertThat(writtenDataHash).isEqualTo(readDataHash); + assertThat(readBytesTotal).isEqualTo(sizeInMBs*chunkSize); + assertThat(stream.size()).isEqualTo(sizeInMBs*chunkSize); + + assertThat(stream.size()).isEqualTo(sizeInMBs*chunkSize); + assertThat(redisson.getBucket("test").isExists()).isTrue(); + if (sizeInMBs*chunkSize <= 512*1024*1024) { + assertThat(redisson.getBucket("test:parts").isExists()).isFalse(); + assertThat(redisson.getBucket("test:1").isExists()).isFalse(); + } else { + int parts = (sizeInMBs*chunkSize)/(512*1024*1024); + for (int i = 1; i < parts-1; i++) { + assertThat(redisson.getBucket("test:" + i).isExists()).isTrue(); + } + } + } + + @Test + public void testSkip() throws IOException { + RBinaryStream t = redisson.getBinaryStream("test"); + t.set(new byte[] {1, 2, 3, 4, 5, 6}); + + InputStream is = t.getInputStream(); + is.skip(3); + byte[] b = new byte[6]; + is.read(b); + assertThat(b).isEqualTo(new byte[] {4, 5, 6, 0, 0, 0}); + } + + @Test + public void testLimit512by1024() throws IOException, NoSuchAlgorithmException { + testLimit(512, 1024*1024); + } + + @Test + public void testLimit1024By1000() throws IOException, NoSuchAlgorithmException { + testLimit(1024, 1000*1000); + } + + @Test + public void testSet100() { + RBinaryStream stream = redisson.getBinaryStream("test"); + + byte[] bytes = new byte[100*1024*1024]; + ThreadLocalRandom.current().nextBytes(bytes); + stream.set(bytes); + + assertThat(stream.size()).isEqualTo(bytes.length); + assertThat(stream.get()).isEqualTo(bytes); + } + + @Test + public void testSet1024() { + RBinaryStream stream = redisson.getBinaryStream("test"); + + byte[] bytes = new byte[1024*1024*1024]; + ThreadLocalRandom.current().nextBytes(bytes); + stream.set(bytes); + + assertThat(stream.size()).isEqualTo(bytes.length); + assertThat(redisson.getBucket("test:parts").isExists()).isTrue(); + assertThat(redisson.getBucket("test").size()).isEqualTo(512*1024*1024); + assertThat(redisson.getBucket("test:1").size()).isEqualTo(bytes.length - 512*1024*1024); + } + + @Test + public void testLimit1024By1024() throws IOException, NoSuchAlgorithmException { + testLimit(1024, 1024*1024); + } + + @Test + public void testRead() throws IOException { + RBinaryStream stream = redisson.getBinaryStream("test"); + byte[] value = {1, 2, 3, 4, 5, (byte)0xFF}; + stream.set(value); + + InputStream s = stream.getInputStream(); + int b = 0; + byte[] readValue = new byte[6]; + int i = 0; + while (true) { + b = s.read(); + if (b == -1) { + break; + } + readValue[i] = (byte) b; + i++; + } + + assertThat(readValue).isEqualTo(value); + } + + @Test + public void testReadArray() throws IOException { + RBinaryStream stream = redisson.getBinaryStream("test"); + byte[] value = {1, 2, 3, 4, 5, 6}; + stream.set(value); + + InputStream s = stream.getInputStream(); + byte[] b = new byte[6]; + assertThat(s.read(b)).isEqualTo(6); + assertThat(s.read(b)).isEqualTo(-1); + + assertThat(b).isEqualTo(value); + } + + @Test + public void testReadArrayWithOffset() throws IOException { + RBinaryStream stream = redisson.getBinaryStream("test"); + byte[] value = {1, 2, 3, 4, 5, 6}; + stream.set(value); + + InputStream s = stream.getInputStream(); + byte[] b = new byte[4]; + assertThat(s.read(b, 1, 3)).isEqualTo(3); + + byte[] valuesRead = {0, 1, 2, 3}; + assertThat(b).isEqualTo(valuesRead); + } + + @Test + public void 
testWriteArray() throws IOException { + RBinaryStream stream = redisson.getBinaryStream("test"); + OutputStream os = stream.getOutputStream(); + byte[] value = {1, 2, 3, 4, 5, 6}; + os.write(value); + + byte[] s = stream.get(); + assertThat(s).isEqualTo(value); + } + + @Test + public void testWriteArrayWithOffset() throws IOException { + RBinaryStream stream = redisson.getBinaryStream("test"); + OutputStream os = stream.getOutputStream(); + + byte[] value = {1, 2, 3, 4, 5, 6}; + os.write(value, 0, 3); + byte[] s = stream.get(); + + assertThat(s).isEqualTo(new byte[] {1, 2, 3}); + + os.write(value, 3, 3); + s = stream.get(); + + assertThat(s).isEqualTo(value); + } + + +} diff --git a/redisson/src/test/java/org/redisson/RedissonBlockingDequeTest.java b/redisson/src/test/java/org/redisson/RedissonBlockingDequeTest.java index 7770563a2..0b12a1311 100644 --- a/redisson/src/test/java/org/redisson/RedissonBlockingDequeTest.java +++ b/redisson/src/test/java/org/redisson/RedissonBlockingDequeTest.java @@ -11,6 +11,13 @@ import org.redisson.api.RBlockingDeque; public class RedissonBlockingDequeTest extends BaseTest { + @Test(timeout = 3000) + public void testShortPoll() throws InterruptedException { + RBlockingDeque queue = redisson.getBlockingDeque("queue:pollany"); + queue.pollLastAsync(500, TimeUnit.MILLISECONDS); + queue.pollFirstAsync(10, TimeUnit.MICROSECONDS); + } + @Test public void testPollLastFromAny() throws InterruptedException { final RBlockingDeque queue1 = redisson.getBlockingDeque("deque:pollany"); diff --git a/redisson/src/test/java/org/redisson/RedissonBlockingFairQueueTest.java b/redisson/src/test/java/org/redisson/RedissonBlockingFairQueueTest.java new file mode 100644 index 000000000..ff07e03e7 --- /dev/null +++ b/redisson/src/test/java/org/redisson/RedissonBlockingFairQueueTest.java @@ -0,0 +1,226 @@ +package org.redisson; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; + +import org.junit.Test; +import org.redisson.api.RBlockingFairQueue; +import org.redisson.api.RBlockingQueue; +import org.redisson.api.RedissonClient; + +public class RedissonBlockingFairQueueTest extends BaseTest { + + @Test + public void testTimeout() throws InterruptedException { + int size = 1000; + CountDownLatch latch = new CountDownLatch(size); + AtomicInteger t1Counter = new AtomicInteger(); + AtomicInteger t2Counter = new AtomicInteger(); + AtomicInteger t3Counter = new AtomicInteger(); + + RedissonClient redisson1 = createInstance(); + RBlockingFairQueue queue1 = redisson1.getBlockingFairQueue("test"); + Thread t1 = new Thread("test-thread1") { + public void run() { + while (true) { + try { + String a = queue1.poll(5, TimeUnit.SECONDS); + if (latch.getCount() == 0) { + break; + } + if (a == null) { + continue; + } + latch.countDown(); + t1Counter.incrementAndGet(); + } catch (InterruptedException e) { + } + } + }; + }; + + RedissonClient redisson2 = createInstance(); + RBlockingFairQueue queue2 = redisson2.getBlockingFairQueue("test"); + Thread t2 = new Thread("test-thread2") { + public void run() { + try { + String a = queue2.poll(2, TimeUnit.SECONDS); + if (a != null) { + latch.countDown(); + t2Counter.incrementAndGet(); + } + } catch (InterruptedException e) { + } + }; + }; + + RedissonClient redisson3 = createInstance(); + RBlockingFairQueue queue3 = redisson3.getBlockingFairQueue("test"); + Thread t3 = new Thread("test-thread3") { + public void 
run() { + while (true) { + try { + String a = queue3.poll(5, TimeUnit.SECONDS); + if (latch.getCount() == 0) { + break; + } + if (a == null) { + continue; + } + latch.countDown(); + t3Counter.incrementAndGet(); + } catch (InterruptedException e) { + } + } + }; + }; + + t1.start(); + t1.join(500); + t2.start(); + t2.join(500); + t3.start(); + t3.join(500); + + RBlockingQueue queue = redisson.getBlockingFairQueue("test"); + assertThat(redisson.getList("{" + queue.getName() + "}:list").size()).isEqualTo(3); + + for (int i = 0; i < size; i++) { + queue.add("" + i); + } + + t1.join(); + t2.join(); + t3.join(); + + assertThat(latch.await(50, TimeUnit.SECONDS)).isTrue(); + + assertThat(t1Counter.get()).isBetween(499, 500); + assertThat(t2Counter.get()).isEqualTo(1); + assertThat(t3Counter.get()).isBetween(499, 500); + + assertThat(redisson.getList("{" + queue.getName() + "}:list").size()).isEqualTo(2); + } + + @Test + public void testFairness() throws InterruptedException { + int size = 1000; + + CountDownLatch latch = new CountDownLatch(size); + AtomicInteger t1Counter = new AtomicInteger(); + AtomicInteger t2Counter = new AtomicInteger(); + AtomicInteger t3Counter = new AtomicInteger(); + AtomicInteger t4Counter = new AtomicInteger(); + + RedissonClient redisson1 = createInstance(); + RBlockingFairQueue queue1 = redisson1.getBlockingFairQueue("test"); + Thread t1 = new Thread("test-thread1") { + public void run() { + while (true) { + try { + String a = queue1.poll(1, TimeUnit.SECONDS); + if (a == null) { + break; + } + latch.countDown(); + t1Counter.incrementAndGet(); + } catch (InterruptedException e) { + } + } + }; + }; + + RedissonClient redisson2 = createInstance(); + RBlockingFairQueue queue2 = redisson2.getBlockingFairQueue("test"); + Thread t2 = new Thread("test-thread2") { + public void run() { + while (true) { + try { + String a = queue2.poll(1, TimeUnit.SECONDS); + if (a == null) { + break; + } + Thread.sleep(50); + latch.countDown(); + t2Counter.incrementAndGet(); + } catch (InterruptedException e) { + } + } + }; + }; + + RedissonClient redisson3 = createInstance(); + RBlockingFairQueue queue3 = redisson3.getBlockingFairQueue("test"); + Thread t3 = new Thread("test-thread3") { + public void run() { + while (true) { + try { + String a = queue3.poll(1, TimeUnit.SECONDS); + if (a == null) { + break; + } + Thread.sleep(10); + latch.countDown(); + t3Counter.incrementAndGet(); + } catch (InterruptedException e) { + } + } + }; + }; + + RedissonClient redisson4 = createInstance(); + RBlockingFairQueue queue4 = redisson4.getBlockingFairQueue("test"); + Thread t4 = new Thread("test-thread4") { + public void run() { + while (true) { + try { + String a = queue4.poll(1, TimeUnit.SECONDS); + if (a == null) { + break; + } + latch.countDown(); + t4Counter.incrementAndGet(); + } catch (InterruptedException e) { + } + } + }; + }; + + RBlockingQueue queue = redisson.getBlockingFairQueue("test"); + for (int i = 0; i < size; i++) { + queue.add("" + i); + } + + t1.start(); + t2.start(); + t3.start(); + t4.start(); + + t1.join(); + t2.join(); + t3.join(); + t4.join(); + + assertThat(latch.await(5, TimeUnit.SECONDS)).isTrue(); + + queue1.destroy(); + queue2.destroy(); + queue3.destroy(); + queue4.destroy(); + redisson1.shutdown(); + redisson2.shutdown(); + redisson3.shutdown(); + redisson4.shutdown(); + + assertThat(t1Counter.get()).isEqualTo(250); + assertThat(t2Counter.get()).isEqualTo(250); + assertThat(t3Counter.get()).isEqualTo(250); + assertThat(t4Counter.get()).isEqualTo(250); + 
assertThat(redisson.getKeys().count()).isEqualTo(1); + } + +} + diff --git a/redisson/src/test/java/org/redisson/RedissonBlockingQueueTest.java b/redisson/src/test/java/org/redisson/RedissonBlockingQueueTest.java index 76b0d31cc..abf7774ca 100644 --- a/redisson/src/test/java/org/redisson/RedissonBlockingQueueTest.java +++ b/redisson/src/test/java/org/redisson/RedissonBlockingQueueTest.java @@ -48,6 +48,15 @@ public class RedissonBlockingQueueTest extends BaseTest { long start = System.currentTimeMillis(); assertThat(f.get()).isNull(); assertThat(System.currentTimeMillis() - start).isGreaterThan(3800); + + redisson.shutdown(); + } + + @Test(timeout = 3000) + public void testShortPoll() throws InterruptedException { + RBlockingQueue queue = redisson.getBlockingQueue("queue:pollany"); + queue.poll(500, TimeUnit.MILLISECONDS); + queue.poll(10, TimeUnit.MICROSECONDS); } @Test @@ -99,6 +108,7 @@ public class RedissonBlockingQueueTest extends BaseTest { await().atMost(5, TimeUnit.SECONDS).until(() -> assertThat(executed.get()).isTrue()); + redisson.shutdown(); runner.stop(); } @@ -134,6 +144,8 @@ public class RedissonBlockingQueueTest extends BaseTest { Integer result = f.get(1, TimeUnit.SECONDS); assertThat(result).isEqualTo(123); + + redisson.shutdown(); runner.stop(); } @@ -170,6 +182,8 @@ public class RedissonBlockingQueueTest extends BaseTest { Integer result = f.get(1, TimeUnit.SECONDS); assertThat(result).isEqualTo(123); runner.stop(); + + redisson.shutdown(); } @Test diff --git a/redisson/src/test/java/org/redisson/RedissonBloomFilterTest.java b/redisson/src/test/java/org/redisson/RedissonBloomFilterTest.java index 61f9addc9..3491d495e 100644 --- a/redisson/src/test/java/org/redisson/RedissonBloomFilterTest.java +++ b/redisson/src/test/java/org/redisson/RedissonBloomFilterTest.java @@ -7,6 +7,24 @@ import static org.assertj.core.api.Assertions.*; public class RedissonBloomFilterTest extends BaseTest { + @Test(expected = IllegalArgumentException.class) + public void testFalseProbability1() { + RBloomFilter filter = redisson.getBloomFilter("filter"); + filter.tryInit(1, -1); + } + + @Test(expected = IllegalArgumentException.class) + public void testFalseProbability2() { + RBloomFilter filter = redisson.getBloomFilter("filter"); + filter.tryInit(1, 2); + } + + @Test(expected = IllegalArgumentException.class) + public void testSizeZero() { + RBloomFilter filter = redisson.getBloomFilter("filter"); + filter.tryInit(1, 1); + } + @Test public void testConfig() { RBloomFilter filter = redisson.getBloomFilter("filter"); diff --git a/redisson/src/test/java/org/redisson/RedissonBoundedBlockingQueueTest.java b/redisson/src/test/java/org/redisson/RedissonBoundedBlockingQueueTest.java index 8e29512cb..2e9cab970 100644 --- a/redisson/src/test/java/org/redisson/RedissonBoundedBlockingQueueTest.java +++ b/redisson/src/test/java/org/redisson/RedissonBoundedBlockingQueueTest.java @@ -238,6 +238,8 @@ public class RedissonBoundedBlockingQueueTest extends BaseTest { long start = System.currentTimeMillis(); assertThat(f.get()).isNull(); assertThat(System.currentTimeMillis() - start).isGreaterThan(3800); + + redisson.shutdown(); } @Test @@ -292,6 +294,7 @@ public class RedissonBoundedBlockingQueueTest extends BaseTest { await().atMost(5, TimeUnit.SECONDS).until(() -> assertThat(executed.get()).isTrue()); + redisson.shutdown(); runner.stop(); } @@ -328,6 +331,8 @@ public class RedissonBoundedBlockingQueueTest extends BaseTest { Integer result = f.get(1, TimeUnit.SECONDS); assertThat(result).isEqualTo(123); + + 
redisson.shutdown(); runner.stop(); } @@ -368,6 +373,8 @@ public class RedissonBoundedBlockingQueueTest extends BaseTest { Integer result = f.get(1, TimeUnit.SECONDS); assertThat(result).isEqualTo(123); runner.stop(); + + redisson.shutdown(); } @Test diff --git a/redisson/src/test/java/org/redisson/RedissonCodecTest.java b/redisson/src/test/java/org/redisson/RedissonCodecTest.java index bab1670ea..372c9130f 100644 --- a/redisson/src/test/java/org/redisson/RedissonCodecTest.java +++ b/redisson/src/test/java/org/redisson/RedissonCodecTest.java @@ -4,6 +4,7 @@ import com.fasterxml.jackson.core.type.TypeReference; import org.junit.Assert; import org.junit.Test; import org.redisson.api.RMap; +import org.redisson.api.RedissonClient; import org.redisson.client.codec.Codec; import org.redisson.client.codec.JsonJacksonMapValueCodec; import org.redisson.codec.CborJacksonCodec; @@ -44,90 +45,90 @@ public class RedissonCodecTest extends BaseTest { public void testLZ4() { Config config = createConfig(); config.setCodec(lz4Codec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testJdk() { Config config = createConfig(); config.setCodec(codec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testMsgPack() { Config config = createConfig(); config.setCodec(msgPackCodec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testSmile() { Config config = createConfig(); config.setCodec(smileCodec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testAvro() { Config config = createConfig(); config.setCodec(avroCodec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testFst() { Config config = createConfig(); config.setCodec(fstCodec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testSnappy() { Config config = createConfig(); config.setCodec(snappyCodec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testJson() { Config config = createConfig(); config.setCodec(jsonCodec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testKryo() { Config config = createConfig(); config.setCodec(kryoCodec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @Test public void testCbor() { Config config = createConfig(); config.setCodec(cborCodec); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); - test(); + test(redisson); } @@ -135,7 +136,7 @@ public class RedissonCodecTest extends BaseTest { public void testListOfStrings() { Config config = createConfig(); config.setCodec(new JsonJacksonCodec()); - redisson = Redisson.create(config); + RedissonClient redisson = Redisson.create(config); RMap> map = redisson.getMap("list of strings", jsonListOfStringCodec); map.put("foo", new ArrayList(Arrays.asList("bar"))); @@ -143,9 +144,11 @@ public class RedissonCodecTest extends BaseTest { RMap> map2 = 
redisson.getMap("list of strings", jsonListOfStringCodec); assertThat(map2).isEqualTo(map); + + redisson.shutdown(); } - public void test() { + public void test(RedissonClient redisson) { RMap> map = redisson.getMap("getAll"); Map a = new HashMap(); a.put("double", new Double(100000.0)); @@ -172,5 +175,7 @@ public class RedissonCodecTest extends BaseTest { Assert.assertTrue(set.contains(new TestObject("2", "3"))); Assert.assertTrue(set.contains(new TestObject("1", "2"))); Assert.assertFalse(set.contains(new TestObject("1", "9"))); + + redisson.shutdown(); } } diff --git a/redisson/src/test/java/org/redisson/RedissonDelayedQueueTest.java b/redisson/src/test/java/org/redisson/RedissonDelayedQueueTest.java new file mode 100644 index 000000000..8ab730438 --- /dev/null +++ b/redisson/src/test/java/org/redisson/RedissonDelayedQueueTest.java @@ -0,0 +1,248 @@ +package org.redisson; + +import static org.assertj.core.api.Assertions.assertThat; +import java.util.Arrays; +import java.util.concurrent.TimeUnit; + +import org.junit.Test; +import org.redisson.api.RBlockingFairQueue; +import org.redisson.api.RDelayedQueue; +import org.redisson.api.RQueue; + +public class RedissonDelayedQueueTest extends BaseTest { + + @Test + public void testDealyedQueueRetainAll() { + RBlockingFairQueue queue1 = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue1); + dealyedQueue.offer(3, 5, TimeUnit.SECONDS); + dealyedQueue.offer(1, 2, TimeUnit.SECONDS); + dealyedQueue.offer(2, 1, TimeUnit.SECONDS); + + assertThat(dealyedQueue.retainAll(Arrays.asList(1, 2, 3))).isFalse(); + assertThat(dealyedQueue.retainAll(Arrays.asList(3, 1, 2, 8))).isFalse(); + assertThat(dealyedQueue.readAll()).containsExactly(3, 1, 2); + + assertThat(dealyedQueue.retainAll(Arrays.asList(1, 2))).isTrue(); + assertThat(dealyedQueue.readAll()).containsExactly(1, 2); + + dealyedQueue.destroy(); + } + + + @Test + public void testDealyedQueueReadAll() { + RBlockingFairQueue queue1 = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue1); + dealyedQueue.offer(3, 5, TimeUnit.SECONDS); + dealyedQueue.offer(1, 2, TimeUnit.SECONDS); + dealyedQueue.offer(2, 1, TimeUnit.SECONDS); + + assertThat(dealyedQueue.readAll()).containsExactly(3, 1, 2); + + dealyedQueue.destroy(); + } + + @Test + public void testDealyedQueueRemoveAll() { + RBlockingFairQueue queue1 = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue1); + dealyedQueue.offer(3, 5, TimeUnit.SECONDS); + dealyedQueue.offer(1, 2, TimeUnit.SECONDS); + dealyedQueue.offer(2, 1, TimeUnit.SECONDS); + + assertThat(dealyedQueue.removeAll(Arrays.asList(1, 2))).isTrue(); + assertThat(dealyedQueue).containsExactly(3); + assertThat(dealyedQueue.removeAll(Arrays.asList(3, 4))).isTrue(); + assertThat(dealyedQueue).isEmpty(); + + dealyedQueue.destroy(); + } + + @Test + public void testDealyedQueueContainsAll() { + RBlockingFairQueue queue1 = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue1); + + dealyedQueue.offer(3, 5, TimeUnit.SECONDS); + dealyedQueue.offer(1, 2, TimeUnit.SECONDS); + dealyedQueue.offer(2, 1, TimeUnit.SECONDS); + + assertThat(dealyedQueue.containsAll(Arrays.asList(1, 2))).isTrue(); + assertThat(dealyedQueue.containsAll(Arrays.asList(1, 2, 4))).isFalse(); + + dealyedQueue.destroy(); + } + + @Test + public void testDealyedQueueContains() { + RBlockingFairQueue queue1 = redisson.getBlockingFairQueue("test"); + 
RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue1); + + dealyedQueue.offer(3, 5, TimeUnit.SECONDS); + dealyedQueue.offer(1, 2, TimeUnit.SECONDS); + dealyedQueue.offer(2, 1, TimeUnit.SECONDS); + + assertThat(dealyedQueue.contains(1)).isTrue(); + assertThat(dealyedQueue.contains(4)).isFalse(); + + dealyedQueue.destroy(); + } + + @Test + public void testDealyedQueueRemove() { + RBlockingFairQueue queue1 = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue1); + + dealyedQueue.offer(3, 5, TimeUnit.SECONDS); + dealyedQueue.offer(1, 2, TimeUnit.SECONDS); + dealyedQueue.offer(2, 1, TimeUnit.SECONDS); + + assertThat(dealyedQueue.remove(4)).isFalse(); + assertThat(dealyedQueue.remove(3)).isTrue(); + assertThat(dealyedQueue).containsExactly(1, 2); + + dealyedQueue.destroy(); + } + + @Test + public void testDealyedQueuePeek() { + RBlockingFairQueue queue1 = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue1); + + dealyedQueue.offer(3, 5, TimeUnit.SECONDS); + dealyedQueue.offer(1, 2, TimeUnit.SECONDS); + dealyedQueue.offer(2, 1, TimeUnit.SECONDS); + + assertThat(dealyedQueue.peek()).isEqualTo(3); + + dealyedQueue.destroy(); + } + + @Test + public void testDealyedQueuePollLastAndOfferFirstTo() { + RBlockingFairQueue queue1 = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue1); + + dealyedQueue.offer(3, 5, TimeUnit.SECONDS); + dealyedQueue.offer(2, 2, TimeUnit.SECONDS); + dealyedQueue.offer(1, 1, TimeUnit.SECONDS); + + RQueue queue2 = redisson.getQueue("deque2"); + queue2.offer(6); + queue2.offer(5); + queue2.offer(4); + + assertThat(dealyedQueue.pollLastAndOfferFirstTo(queue2.getName())).isEqualTo(1); + assertThat(queue2).containsExactly(1, 6, 5, 4); + + dealyedQueue.destroy(); + } + + @Test + public void testDelayedQueueOrder() { + RBlockingFairQueue queue = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue); + + dealyedQueue.offer("1", 1, TimeUnit.SECONDS); + dealyedQueue.offer("4", 4, TimeUnit.SECONDS); + dealyedQueue.offer("3", 3, TimeUnit.SECONDS); + dealyedQueue.offer("2", 2, TimeUnit.SECONDS); + + assertThat(dealyedQueue).containsExactly("1", "4", "3", "2"); + + assertThat(dealyedQueue.poll()).isEqualTo("1"); + assertThat(dealyedQueue.poll()).isEqualTo("4"); + assertThat(dealyedQueue.poll()).isEqualTo("3"); + assertThat(dealyedQueue.poll()).isEqualTo("2"); + + assertThat(queue.isEmpty()).isTrue(); + + assertThat(queue.poll()).isNull(); + + dealyedQueue.destroy(); + } + + @Test + public void testDealyedQueuePoll() throws InterruptedException { + RBlockingFairQueue queue = redisson.getBlockingFairQueue("test"); + RDelayedQueue dealyedQueue = redisson.getDelayedQueue(queue); + + dealyedQueue.offer("1", 1, TimeUnit.SECONDS); + dealyedQueue.offer("2", 2, TimeUnit.SECONDS); + dealyedQueue.offer("3", 3, TimeUnit.SECONDS); + dealyedQueue.offer("4", 4, TimeUnit.SECONDS); + + assertThat(dealyedQueue.poll()).isEqualTo("1"); + assertThat(dealyedQueue.poll()).isEqualTo("2"); + assertThat(dealyedQueue.poll()).isEqualTo("3"); + assertThat(dealyedQueue.poll()).isEqualTo("4"); + + Thread.sleep(3000); + assertThat(queue.isEmpty()).isTrue(); + + assertThat(queue.poll()).isNull(); + assertThat(queue.poll()).isNull(); + + dealyedQueue.destroy(); + } + + @Test + public void testDealyedQueue() throws InterruptedException { + RBlockingFairQueue queue = redisson.getBlockingFairQueue("test"); + RDelayedQueue 
dealyedQueue = redisson.getDelayedQueue(queue); + + dealyedQueue.offer("1", 1, TimeUnit.SECONDS); + dealyedQueue.offer("2", 5, TimeUnit.SECONDS); + dealyedQueue.offer("4", 4, TimeUnit.SECONDS); + dealyedQueue.offer("2", 2, TimeUnit.SECONDS); + dealyedQueue.offer("3", 3, TimeUnit.SECONDS); + + assertThat(dealyedQueue).containsExactly("1", "2", "4", "2", "3"); + + Thread.sleep(500); + assertThat(queue.isEmpty()).isTrue(); + Thread.sleep(600); + assertThat(queue).containsExactly("1"); + assertThat(dealyedQueue).containsExactly("2", "4", "2", "3"); + + Thread.sleep(500); + assertThat(queue).containsExactly("1"); + + Thread.sleep(500); + assertThat(queue).containsExactly("1", "2"); + assertThat(dealyedQueue).containsExactly("2", "4", "3"); + + Thread.sleep(500); + assertThat(queue).containsExactly("1", "2"); + + Thread.sleep(500); + assertThat(queue).containsExactly("1", "2", "3"); + assertThat(dealyedQueue).containsExactly("2", "4"); + + Thread.sleep(500); + assertThat(queue).containsExactly("1", "2", "3"); + + Thread.sleep(500); + assertThat(queue).containsExactly("1", "2", "3", "4"); + + assertThat(dealyedQueue).containsExactly("2"); + Thread.sleep(500); + assertThat(queue).containsExactly("1", "2", "3", "4"); + Thread.sleep(500); + assertThat(queue).containsExactly("1", "2", "3", "4", "2"); + + assertThat(dealyedQueue).isEmpty(); + + assertThat(queue.poll()).isEqualTo("1"); + assertThat(queue.poll()).isEqualTo("2"); + assertThat(queue.poll()).isEqualTo("3"); + assertThat(queue.poll()).isEqualTo("4"); + assertThat(queue.poll()).isEqualTo("2"); + + dealyedQueue.destroy(); + } + + + +} diff --git a/redisson/src/test/java/org/redisson/RedissonFairLockTest.java b/redisson/src/test/java/org/redisson/RedissonFairLockTest.java index 0780413b4..4221f1109 100644 --- a/redisson/src/test/java/org/redisson/RedissonFairLockTest.java +++ b/redisson/src/test/java/org/redisson/RedissonFairLockTest.java @@ -15,6 +15,63 @@ import org.redisson.api.RLock; public class RedissonFairLockTest extends BaseConcurrentTest { + @Test + public void testTryLockNonDelayed() throws InterruptedException { + String LOCK_NAME = "SOME_LOCK"; + + Thread t1 = new Thread(() -> { + RLock fairLock = redisson.getFairLock(LOCK_NAME); + try { + if (fairLock.tryLock(0, TimeUnit.SECONDS)) { + try { + Thread.sleep(1000L); + } catch (InterruptedException e) { + e.printStackTrace(); + } + } else { + Assert.fail("Unable to acquire lock for some reason"); + } + } catch (InterruptedException e) { + e.printStackTrace(); + } finally { + fairLock.unlock(); + } + }); + + Thread t2 = new Thread(() -> { + try { + Thread.sleep(200L); + } catch (InterruptedException e) { + e.printStackTrace(); + } + RLock fairLock = redisson.getFairLock(LOCK_NAME); + try { + if (fairLock.tryLock(200, TimeUnit.MILLISECONDS)) { + Assert.fail("Should not be inside second block"); + } + } catch (InterruptedException e) { + e.printStackTrace(); + } finally { + fairLock.unlock(); + } + }); + + t1.start(); + t2.start(); + + t1.join(); + t2.join(); + + RLock fairLock = redisson.getFairLock(LOCK_NAME); + try { + if (!fairLock.tryLock(0, TimeUnit.SECONDS)) { + Assert.fail("Could not get unlocked lock " + LOCK_NAME); + } + } finally { + fairLock.unlock(); + } + } + @Test public void testTryLockWait() throws InterruptedException { testSingleInstanceConcurrency(1, r -> { @@ -58,14 +115,12 @@ public class RedissonFairLockTest extends BaseConcurrentTest { Thread t = new Thread() { public void run() { RLock lock1 = redisson.getFairLock("lock"); - System.out.println("0"); 
lock1.lock(); - System.out.println("1"); + long spendTime = System.currentTimeMillis() - startTime; System.out.println(spendTime); Assert.assertTrue(spendTime < 2020); lock1.unlock(); - System.out.println("3"); }; }; diff --git a/redisson/src/test/java/org/redisson/RedissonKeysReactiveTest.java b/redisson/src/test/java/org/redisson/RedissonKeysReactiveTest.java index 87ad95235..ae5bebb96 100644 --- a/redisson/src/test/java/org/redisson/RedissonKeysReactiveTest.java +++ b/redisson/src/test/java/org/redisson/RedissonKeysReactiveTest.java @@ -12,10 +12,10 @@ public class RedissonKeysReactiveTest extends BaseReactiveTest { @Test public void testKeysIterablePattern() { - redisson.getBucket("test1").set("someValue"); - redisson.getBucket("test2").set("someValue"); + sync(redisson.getBucket("test1").set("someValue")); + sync(redisson.getBucket("test2").set("someValue")); - redisson.getBucket("test12").set("someValue"); + sync(redisson.getBucket("test12").set("someValue")); Iterator iterator = toIterator(redisson.getKeys().getKeysByPattern("test?")); for (; iterator.hasNext();) { diff --git a/redisson/src/test/java/org/redisson/RedissonKeysTest.java b/redisson/src/test/java/org/redisson/RedissonKeysTest.java index 5614ed82a..5127d732a 100644 --- a/redisson/src/test/java/org/redisson/RedissonKeysTest.java +++ b/redisson/src/test/java/org/redisson/RedissonKeysTest.java @@ -15,6 +15,17 @@ import org.redisson.api.RType; public class RedissonKeysTest extends BaseTest { + @Test + public void testExists() { + redisson.getSet("test").add("1"); + redisson.getSet("test10").add("1"); + + assertThat(redisson.getKeys().isExists("test")).isEqualTo(1); + assertThat(redisson.getKeys().isExists("test", "test2")).isEqualTo(1); + assertThat(redisson.getKeys().isExists("test3", "test2")).isEqualTo(0); + assertThat(redisson.getKeys().isExists("test3", "test10", "test")).isEqualTo(2); + } + @Test public void testType() { redisson.getSet("test").add("1"); diff --git a/redisson/src/test/java/org/redisson/RedissonListTest.java b/redisson/src/test/java/org/redisson/RedissonListTest.java index 347111a4d..5ae83ad9f 100644 --- a/redisson/src/test/java/org/redisson/RedissonListTest.java +++ b/redisson/src/test/java/org/redisson/RedissonListTest.java @@ -4,6 +4,7 @@ import static org.assertj.core.api.Assertions.assertThat; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collection; import java.util.Collections; import java.util.Iterator; import java.util.LinkedList; @@ -13,10 +14,174 @@ import java.util.ListIterator; import org.junit.Assert; import org.junit.Test; import org.redisson.api.RList; +import org.redisson.api.SortOrder; import org.redisson.client.RedisException; +import org.redisson.client.codec.IntegerCodec; +import org.redisson.client.codec.StringCodec; public class RedissonListTest extends BaseTest { + @Test + public void testSortOrder() { + RList list = redisson.getList("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + List descSort = list.readSort(SortOrder.DESC); + assertThat(descSort).containsExactly(3, 2, 1); + + List ascSort = list.readSort(SortOrder.ASC); + assertThat(ascSort).containsExactly(1, 2, 3); + } + + @Test + public void testSortOrderLimit() { + RList list = redisson.getList("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + List descSort = list.readSort(SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly(2, 1); + + List ascSort = list.readSort(SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly(2, 
3); + } + + @Test + public void testSortOrderByPattern() { + RList list = redisson.getList("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + List descSort = list.readSort("test*", SortOrder.DESC); + assertThat(descSort).containsExactly(1, 2, 3); + + List ascSort = list.readSort("test*", SortOrder.ASC); + assertThat(ascSort).containsExactly(3, 2, 1); + } + + @Test + public void testSortOrderByPatternLimit() { + RList list = redisson.getList("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + List descSort = list.readSort("test*", SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly(2, 3); + + List ascSort = list.readSort("test*", SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly(2, 1); + } + + @Test + public void testSortOrderByPatternGet() { + RList list = redisson.getList("list", StringCodec.INSTANCE); + list.add("1"); + list.add("2"); + list.add("3"); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(1); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(3); + + redisson.getBucket("tester1", StringCodec.INSTANCE).set("obj1"); + redisson.getBucket("tester2", StringCodec.INSTANCE).set("obj2"); + redisson.getBucket("tester3", StringCodec.INSTANCE).set("obj3"); + + Collection descSort = list.readSort("test*", Arrays.asList("tester*"), SortOrder.DESC); + assertThat(descSort).containsExactly("obj3", "obj2", "obj1"); + + Collection ascSort = list.readSort("test*", Arrays.asList("tester*"), SortOrder.ASC); + assertThat(ascSort).containsExactly("obj1", "obj2", "obj3"); + } + + @Test + public void testSortOrderByPatternGetLimit() { + RList list = redisson.getList("list", StringCodec.INSTANCE); + list.add("1"); + list.add("2"); + list.add("3"); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(1); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(3); + + redisson.getBucket("tester1", StringCodec.INSTANCE).set("obj1"); + redisson.getBucket("tester2", StringCodec.INSTANCE).set("obj2"); + redisson.getBucket("tester3", StringCodec.INSTANCE).set("obj3"); + + Collection descSort = list.readSort("test*", Arrays.asList("tester*"), SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly("obj2", "obj1"); + + Collection ascSort = list.readSort("test*", Arrays.asList("tester*"), SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly("obj2", "obj3"); + } + + @Test + public void testSortTo() { + RList list = redisson.getList("list", IntegerCodec.INSTANCE); + list.add("1"); + list.add("2"); + list.add("3"); + + assertThat(list.sortTo("test3", SortOrder.DESC)).isEqualTo(3); + RList list2 = redisson.getList("test3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("3", "2", "1"); + + assertThat(list.sortTo("test4", SortOrder.ASC)).isEqualTo(3); + RList list3 = redisson.getList("test4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("1", "2", "3"); + + } + + @Test + public void testSortToLimit() { + RList list = redisson.getList("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + 
list.add(3); + + assertThat(list.sortTo("test3", SortOrder.DESC, 1, 2)).isEqualTo(2); + RList list2 = redisson.getList("test3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("2", "1"); + + assertThat(list.sortTo("test4", SortOrder.ASC, 1, 2)).isEqualTo(2); + RList list3 = redisson.getList("test4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("2", "3"); + } + + @Test + public void testSortToByPattern() { + RList list = redisson.getList("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + assertThat(list.sortTo("tester3", "test*", SortOrder.DESC, 1, 2)).isEqualTo(2); + RList list2 = redisson.getList("tester3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("2", "3"); + + assertThat(list.sortTo("tester4", "test*", SortOrder.ASC, 1, 2)).isEqualTo(2); + RList list3 = redisson.getList("tester4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("2", "1"); + } + @Test public void testAddBefore() { RList list = redisson.getList("list"); diff --git a/redisson/src/test/java/org/redisson/RedissonLocalCachedMapSerializationCodecTest.java b/redisson/src/test/java/org/redisson/RedissonLocalCachedMapSerializationCodecTest.java new file mode 100644 index 000000000..851250641 --- /dev/null +++ b/redisson/src/test/java/org/redisson/RedissonLocalCachedMapSerializationCodecTest.java @@ -0,0 +1,52 @@ +package org.redisson; + +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; +import org.redisson.api.RedissonClient; +import org.redisson.codec.SerializationCodec; +import org.redisson.config.Config; + +import java.io.IOException; + +/** + * Created by jribble on 1/12/17. 
+ */ +public class RedissonLocalCachedMapSerializationCodecTest extends RedissonLocalCachedMapTest { + public static Config createConfig() { + Config config = RedissonLocalCachedMapTest.createConfig(); + config.setCodec(new SerializationCodec()); + return config; + } + + public static RedissonClient createInstance() { + Config config = createConfig(); + return Redisson.create(config); + } + + @BeforeClass + public static void beforeClass() throws IOException, InterruptedException { + if (!RedissonRuntimeEnvironment.isTravis) { + RedisRunner.startDefaultRedisServerInstance(); + defaultRedisson = createInstance(); + } + } + + @Before + public void before() throws IOException, InterruptedException { + if (RedissonRuntimeEnvironment.isTravis) { + RedisRunner.startDefaultRedisServerInstance(); + redisson = createInstance(); + } else { + if (redisson == null) { + redisson = defaultRedisson; + } + redisson.getKeys().flushall(); + } + } + + @Test @Override + public void testAddAndGet() throws InterruptedException { + // this method/test won't work with Java Serialization + } +} diff --git a/redisson/src/test/java/org/redisson/RedissonLocalCachedMapTest.java b/redisson/src/test/java/org/redisson/RedissonLocalCachedMapTest.java index 98ba2b63d..9a8044c6b 100644 --- a/redisson/src/test/java/org/redisson/RedissonLocalCachedMapTest.java +++ b/redisson/src/test/java/org/redisson/RedissonLocalCachedMapTest.java @@ -27,7 +27,7 @@ import mockit.Deencapsulation; public class RedissonLocalCachedMapTest extends BaseTest { -// @Test + // @Test public void testPerf() { LocalCachedMapOptions options = LocalCachedMapOptions.defaults().evictionPolicy(EvictionPolicy.LFU).cacheSize(100000).invalidateEntryOnChange(true); Map map = redisson.getLocalCachedMap("test", options); @@ -323,7 +323,7 @@ public class RedissonLocalCachedMapTest extends BaseTest { } @Test - public void testPutAll() { + public void testPutAll() throws InterruptedException { Map map = redisson.getLocalCachedMap("simple", LocalCachedMapOptions.defaults()); Map map1 = redisson.getLocalCachedMap("simple", LocalCachedMapOptions.defaults()); Cache cache = Deencapsulation.getField(map, "cache"); @@ -344,6 +344,9 @@ public class RedissonLocalCachedMapTest extends BaseTest { map1.putAll(joinMap); + // waiting for cache cleanup listeners triggering + Thread.sleep(500); + assertThat(cache.size()).isEqualTo(3); assertThat(cache1.size()).isEqualTo(3); } diff --git a/redisson/src/test/java/org/redisson/RedissonMapCacheReactiveTest.java b/redisson/src/test/java/org/redisson/RedissonMapCacheReactiveTest.java index d366c993e..64f433597 100644 --- a/redisson/src/test/java/org/redisson/RedissonMapCacheReactiveTest.java +++ b/redisson/src/test/java/org/redisson/RedissonMapCacheReactiveTest.java @@ -141,8 +141,6 @@ public class RedissonMapCacheReactiveTest extends BaseReactiveTest { Map filteredAgain = sync(map.getAll(new HashSet(Arrays.asList(2, 3, 5)))); Assert.assertTrue(filteredAgain.isEmpty()); - Thread.sleep(100); - Assert.assertEquals(2, sync(map.size()).intValue()); } @Test @@ -164,11 +162,11 @@ public class RedissonMapCacheReactiveTest extends BaseReactiveTest { @Test public void testExpiredIterator() throws InterruptedException { RMapCacheReactive cache = redisson.getMapCache("simple"); - cache.put("0", "8"); - cache.put("1", "6", 1, TimeUnit.SECONDS); - cache.put("2", "4", 3, TimeUnit.SECONDS); - cache.put("3", "2", 4, TimeUnit.SECONDS); - cache.put("4", "4", 1, TimeUnit.SECONDS); + sync(cache.put("0", "8")); + sync(cache.put("1", "6", 1, 
TimeUnit.SECONDS)); + sync(cache.put("2", "4", 3, TimeUnit.SECONDS)); + sync(cache.put("3", "2", 4, TimeUnit.SECONDS)); + sync(cache.put("4", "4", 1, TimeUnit.SECONDS)); Thread.sleep(1000); @@ -254,8 +252,6 @@ public class RedissonMapCacheReactiveTest extends BaseReactiveTest { Thread.sleep(1000); Assert.assertFalse(sync(map.containsValue(new SimpleValue("44")))); - Thread.sleep(50); - Assert.assertEquals(0, sync(map.size()).intValue()); } @Test @@ -269,8 +265,6 @@ public class RedissonMapCacheReactiveTest extends BaseReactiveTest { Thread.sleep(1000); Assert.assertFalse(sync(map.containsKey(new SimpleKey("33")))); - Thread.sleep(50); - Assert.assertEquals(0, sync(map.size()).intValue()); } @Test @@ -320,8 +314,6 @@ public class RedissonMapCacheReactiveTest extends BaseReactiveTest { Thread.sleep(1000); Assert.assertNull(sync(map.get(new SimpleKey("33")))); - Thread.sleep(50); - Assert.assertEquals(0, sync(map.size()).intValue()); } @Test @@ -380,7 +372,7 @@ public class RedissonMapCacheReactiveTest extends BaseReactiveTest { @Test public void testKeyIterator() { - RMapReactive map = redisson.getMap("simple"); + RMapReactive map = redisson.getMapCache("simple"); sync(map.put(1, 0)); sync(map.put(3, 5)); sync(map.put(4, 6)); @@ -399,7 +391,7 @@ public class RedissonMapCacheReactiveTest extends BaseReactiveTest { @Test public void testValueIterator() { - RMapReactive map = redisson.getMap("simple"); + RMapReactive map = redisson.getMapCache("simple"); sync(map.put(1, 0)); sync(map.put(3, 5)); sync(map.put(4, 6)); diff --git a/redisson/src/test/java/org/redisson/RedissonMapCacheTest.java b/redisson/src/test/java/org/redisson/RedissonMapCacheTest.java index 1e64420ca..3c3d95904 100644 --- a/redisson/src/test/java/org/redisson/RedissonMapCacheTest.java +++ b/redisson/src/test/java/org/redisson/RedissonMapCacheTest.java @@ -7,6 +7,7 @@ import java.util.Arrays; import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; +import java.util.LinkedHashMap; import java.util.Map; import java.util.Map.Entry; import java.util.concurrent.ConcurrentMap; @@ -17,12 +18,11 @@ import java.util.concurrent.TimeUnit; import org.junit.Assert; import org.junit.Test; import org.redisson.api.RFuture; +import org.redisson.api.RMap; import org.redisson.api.RMapCache; import org.redisson.codec.JsonJacksonCodec; import org.redisson.codec.MsgPackJacksonCodec; -import io.netty.util.concurrent.Future; - public class RedissonMapCacheTest extends BaseTest { public static class SimpleKey implements Serializable { @@ -127,6 +127,37 @@ public class RedissonMapCacheTest extends BaseTest { } + @Test + public void testOrdering() { + Map map = new LinkedHashMap(); + + // General player data + map.put("name", "123"); + map.put("ip", "4124"); + map.put("rank", "none"); + map.put("tokens", "0"); + map.put("coins", "0"); + + // Arsenal player statistics + map.put("ar_score", "0"); + map.put("ar_gameswon", "0"); + map.put("ar_gameslost", "0"); + map.put("ar_kills", "0"); + map.put("ar_deaths", "0"); + + RMap rmap = redisson.getMapCache("123"); + rmap.putAll(map); + + assertThat(rmap.keySet()).containsExactlyElementsOf(map.keySet()); + assertThat(rmap.readAllKeySet()).containsExactlyElementsOf(map.keySet()); + + assertThat(rmap.values()).containsExactlyElementsOf(map.values()); + assertThat(rmap.readAllValues()).containsExactlyElementsOf(map.values()); + + assertThat(rmap.entrySet()).containsExactlyElementsOf(map.entrySet()); + assertThat(rmap.readAllEntrySet()).containsExactlyElementsOf(map.entrySet()); + } + @Test 
public void testCacheValues() { final RMapCache map = redisson.getMapCache("testRMapCacheValues"); @@ -622,6 +653,15 @@ public class RedissonMapCacheTest extends BaseTest { SimpleValue value1 = new SimpleValue("4"); assertThat(map.fastPutIfAbsent(key1, value1)).isTrue(); assertThat(map.get(key1)).isEqualTo(value1); + + SimpleKey key2 = new SimpleKey("3"); + map.put(key2, new SimpleValue("31"), 500, TimeUnit.MILLISECONDS); + assertThat(map.fastPutIfAbsent(key2, new SimpleValue("32"))).isFalse(); + + Thread.sleep(500); + assertThat(map.fastPutIfAbsent(key2, new SimpleValue("32"))).isTrue(); + assertThat(map.get(key2)).isEqualTo(new SimpleValue("32")); + } @Test diff --git a/redisson/src/test/java/org/redisson/RedissonMapReactiveTest.java b/redisson/src/test/java/org/redisson/RedissonMapReactiveTest.java index 5128055b5..e11a61a52 100644 --- a/redisson/src/test/java/org/redisson/RedissonMapReactiveTest.java +++ b/redisson/src/test/java/org/redisson/RedissonMapReactiveTest.java @@ -205,23 +205,6 @@ public class RedissonMapReactiveTest extends BaseReactiveTest { Assert.assertEquals(4L, val2.longValue()); } - @Test - public void testNull() { - RMapReactive map = redisson.getMap("simple12"); - sync(map.put(1, null)); - sync(map.put(2, null)); - sync(map.put(3, "43")); - - Assert.assertEquals(3, sync(map.size()).intValue()); - - String val = sync(map.get(2)); - Assert.assertNull(val); - String val2 = sync(map.get(1)); - Assert.assertNull(val2); - String val3 = sync(map.get(3)); - Assert.assertEquals("43", val3); - } - @Test public void testSimpleTypes() { RMapReactive map = redisson.getMap("simple12"); diff --git a/redisson/src/test/java/org/redisson/RedissonMapTest.java b/redisson/src/test/java/org/redisson/RedissonMapTest.java index 6db62557f..81496e16c 100644 --- a/redisson/src/test/java/org/redisson/RedissonMapTest.java +++ b/redisson/src/test/java/org/redisson/RedissonMapTest.java @@ -162,6 +162,31 @@ public class RedissonMapTest extends BaseTest { assertThat(map.valueSize("4")).isZero(); assertThat(map.valueSize("1")).isEqualTo(6); } + + @Test + public void testGetAllOrder() { + RMap map = redisson.getMap("getAll"); + map.put(1, 100); + map.put(2, 200); + map.put(3, 300); + map.put(4, 400); + map.put(5, 500); + map.put(6, 600); + map.put(7, 700); + map.put(8, 800); + + Map filtered = map.getAll(new HashSet(Arrays.asList(2, 3, 5, 1, 7, 8))); + + Map expectedMap = new LinkedHashMap(); + expectedMap.put(1, 100); + expectedMap.put(2, 200); + expectedMap.put(3, 300); + expectedMap.put(5, 500); + expectedMap.put(7, 700); + expectedMap.put(8, 800); + + assertThat(filtered.entrySet()).containsExactlyElementsOf(expectedMap.entrySet()); + } @Test public void testGetAll() { @@ -291,21 +316,16 @@ public class RedissonMapTest extends BaseTest { assertThat(counter).isEqualTo(size); } - @Test - public void testNull() { + @Test(expected = NullPointerException.class) + public void testNullValue() { Map map = redisson.getMap("simple12"); map.put(1, null); - map.put(2, null); - map.put(3, "43"); - - assertThat(map.size()).isEqualTo(3); - - String val = map.get(2); - assertThat(val).isNull(); - String val2 = map.get(1); - assertThat(val2).isNull(); - String val3 = map.get(3); - assertThat(val3).isEqualTo("43"); + } + + @Test(expected = NullPointerException.class) + public void testNullKey() { + Map map = redisson.getMap("simple12"); + map.put(null, "1"); } @Test diff --git a/redisson/src/test/java/org/redisson/RedissonMultiLockTest.java b/redisson/src/test/java/org/redisson/RedissonMultiLockTest.java index 
8ea0b42af..0a3d832ed 100644 --- a/redisson/src/test/java/org/redisson/RedissonMultiLockTest.java +++ b/redisson/src/test/java/org/redisson/RedissonMultiLockTest.java @@ -49,6 +49,8 @@ public class RedissonMultiLockTest { lock.lock(); lock.unlock(); + client.shutdown(); + assertThat(redis1.stop()).isEqualTo(0); } @@ -89,6 +91,10 @@ public class RedissonMultiLockTest { lock.unlock(); + client1.shutdown(); + client2.shutdown(); + client3.shutdown(); + assertThat(redis1.stop()).isEqualTo(0); assertThat(redis2.stop()).isEqualTo(0); diff --git a/redisson/src/test/java/org/redisson/RedissonPermitExpirableSemaphoreTest.java b/redisson/src/test/java/org/redisson/RedissonPermitExpirableSemaphoreTest.java index 43a7db04a..870e3a75d 100644 --- a/redisson/src/test/java/org/redisson/RedissonPermitExpirableSemaphoreTest.java +++ b/redisson/src/test/java/org/redisson/RedissonPermitExpirableSemaphoreTest.java @@ -13,6 +13,12 @@ import org.redisson.client.RedisException; public class RedissonPermitExpirableSemaphoreTest extends BaseConcurrentTest { + @Test + public void testNotExistent() { + RPermitExpirableSemaphore semaphore = redisson.getPermitExpirableSemaphore("testSemaphoreForNPE"); + Assert.assertEquals(0, semaphore.availablePermits()); + } + @Test public void testAvailablePermits() throws InterruptedException { RPermitExpirableSemaphore semaphore = redisson.getPermitExpirableSemaphore("test-semaphore"); diff --git a/redisson/src/test/java/org/redisson/RedissonPriorityQueueTest.java b/redisson/src/test/java/org/redisson/RedissonPriorityQueueTest.java new file mode 100644 index 000000000..1a57157cd --- /dev/null +++ b/redisson/src/test/java/org/redisson/RedissonPriorityQueueTest.java @@ -0,0 +1,240 @@ +package org.redisson; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.util.Arrays; +import java.util.Collections; +import java.util.Iterator; +import java.util.PriorityQueue; +import java.util.Queue; + +import org.junit.Assert; +import org.junit.Test; +import org.redisson.api.RPriorityQueue; + +public class RedissonPriorityQueueTest extends BaseTest { + + @Test + public void testReadAll() { + RPriorityQueue set = redisson.getPriorityQueue("simple"); + set.add(2); + set.add(0); + set.add(1); + set.add(5); + + assertThat(set.readAll()).containsExactly(0, 1, 2, 5); + } + + @Test + public void testIteratorNextNext() { + RPriorityQueue list = redisson.getPriorityQueue("simple"); + list.add("1"); + list.add("4"); + + Iterator iter = list.iterator(); + Assert.assertEquals("1", iter.next()); + Assert.assertEquals("4", iter.next()); + Assert.assertFalse(iter.hasNext()); + } + + @Test + public void testIteratorRemove() { + RPriorityQueue list = redisson.getPriorityQueue("list"); + list.add("1"); + list.add("4"); + list.add("2"); + list.add("5"); + list.add("3"); + + for (Iterator iterator = list.iterator(); iterator.hasNext();) { + String value = iterator.next(); + if (value.equals("2")) { + iterator.remove(); + } + } + + assertThat(list).contains("1", "4", "5", "3"); + + int iteration = 0; + for (Iterator iterator = list.iterator(); iterator.hasNext();) { + iterator.next(); + iterator.remove(); + iteration++; + } + + Assert.assertEquals(4, iteration); + + Assert.assertEquals(0, list.size()); + Assert.assertTrue(list.isEmpty()); + } + + @Test + public void testIteratorSequence() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + for (int i = 0; i < 1000; i++) { + set.add(Integer.valueOf(i)); + } + + Queue setCopy = new PriorityQueue(); + for (int i = 0; i < 1000; i++) { + 
setCopy.add(Integer.valueOf(i)); + } + + checkIterator(set, setCopy); + } + + private void checkIterator(Queue set, Queue setCopy) { + for (Iterator iterator = set.iterator(); iterator.hasNext();) { + Integer value = iterator.next(); + if (!setCopy.remove(value)) { + Assert.fail(); + } + } + + Assert.assertEquals(0, setCopy.size()); + } + + @Test + public void testTrySetComparator() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + + boolean setRes = set.trySetComparator(Collections.reverseOrder()); + Assert.assertTrue(setRes); + Assert.assertTrue(set.add(1)); + Assert.assertTrue(set.add(2)); + Assert.assertTrue(set.add(3)); + Assert.assertTrue(set.add(4)); + Assert.assertTrue(set.add(5)); + assertThat(set).containsExactly(5, 4, 3, 2, 1); + + boolean setRes2 = set.trySetComparator(Collections.reverseOrder(Collections.reverseOrder())); + Assert.assertFalse(setRes2); + assertThat(set).containsExactly(5, 4, 3, 2, 1); + + set.clear(); + boolean setRes3 = set.trySetComparator(Collections.reverseOrder(Collections.reverseOrder())); + Assert.assertTrue(setRes3); + set.add(3); + set.add(1); + set.add(2); + assertThat(set).containsExactly(1, 2, 3); + } + + + @Test + public void testSort() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + Assert.assertTrue(set.add(2)); + Assert.assertTrue(set.add(3)); + Assert.assertTrue(set.add(1)); + Assert.assertTrue(set.add(4)); + Assert.assertTrue(set.add(10)); + Assert.assertTrue(set.add(-1)); + Assert.assertTrue(set.add(0)); + + assertThat(set).containsExactly(-1, 0, 1, 2, 3, 4, 10); + + Assert.assertEquals(-1, (int)set.peek()); + } + + @Test + public void testRemove() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + set.add(5); + set.add(3); + set.add(1); + set.add(2); + set.add(4); + set.add(1); + + Assert.assertFalse(set.remove(0)); + Assert.assertTrue(set.remove(3)); + Assert.assertTrue(set.remove(1)); + + assertThat(set).containsExactly(1, 2, 4, 5); + } + + @Test + public void testRetainAll() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + for (int i = 0; i < 200; i++) { + set.add(i); + } + + Assert.assertTrue(set.retainAll(Arrays.asList(1, 2))); + Assert.assertEquals(2, set.size()); + } + + @Test + public void testContainsAll() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + for (int i = 0; i < 200; i++) { + set.add(i); + } + + Assert.assertTrue(set.containsAll(Arrays.asList(30, 11))); + Assert.assertFalse(set.containsAll(Arrays.asList(30, 711, 11))); + } + + @Test + public void testToArray() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + set.add("1"); + set.add("4"); + set.add("2"); + set.add("5"); + set.add("3"); + + assertThat(set.toArray()).contains("1", "4", "2", "5", "3"); + + String[] strs = set.toArray(new String[0]); + assertThat(strs).contains("1", "4", "2", "5", "3"); + } + + @Test + public void testContains() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + + set.add(new TestObject("1", "2")); + set.add(new TestObject("1", "2")); + set.add(new TestObject("2", "3")); + set.add(new TestObject("3", "4")); + set.add(new TestObject("5", "6")); + + Assert.assertTrue(set.contains(new TestObject("2", "3"))); + Assert.assertTrue(set.contains(new TestObject("1", "2"))); + Assert.assertFalse(set.contains(new TestObject("1", "9"))); + } + + @Test + public void testDuplicates() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + + set.add(new TestObject("1", "2")); + set.add(new TestObject("2", "3")); + set.add(new TestObject("5", "6")); + set.add(new 
TestObject("1", "2")); + set.add(new TestObject("3", "4")); + + Assert.assertEquals(5, set.size()); + + assertThat(set).containsExactly(new TestObject("1", "2"), new TestObject("1", "2"), + new TestObject("2", "3"), new TestObject("3", "4"), new TestObject("5", "6")); + } + + @Test + public void testSize() { + RPriorityQueue set = redisson.getPriorityQueue("set"); + set.add(1); + set.add(2); + set.add(3); + set.add(3); + set.add(4); + set.add(5); + set.add(5); + + Assert.assertEquals(7, set.size()); + } + + +} diff --git a/redisson/src/test/java/org/redisson/RedissonReadWriteLockTest.java b/redisson/src/test/java/org/redisson/RedissonReadWriteLockTest.java index 65b4e2629..eb2429e49 100644 --- a/redisson/src/test/java/org/redisson/RedissonReadWriteLockTest.java +++ b/redisson/src/test/java/org/redisson/RedissonReadWriteLockTest.java @@ -1,9 +1,11 @@ package org.redisson; +import static org.assertj.core.api.Assertions.assertThat; + import java.security.SecureRandom; import java.util.Random; -import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; @@ -12,10 +14,34 @@ import org.junit.Test; import org.redisson.api.RLock; import org.redisson.api.RReadWriteLock; -import static org.assertj.core.api.Assertions.*; - public class RedissonReadWriteLockTest extends BaseConcurrentTest { + @Test + public void testWriteReadReentrancy() throws InterruptedException { + RReadWriteLock readWriteLock = redisson.getReadWriteLock("TEST"); + readWriteLock.writeLock().lock(); + + java.util.concurrent.locks.Lock rLock = readWriteLock.readLock(); + Assert.assertTrue(rLock.tryLock()); + + AtomicBoolean ref = new AtomicBoolean(); + Thread t1 = new Thread(() -> { + boolean success = readWriteLock.readLock().tryLock(); + ref.set(success); + }); + t1.start(); + t1.join(); + + Assert.assertFalse(ref.get()); + + readWriteLock.writeLock().unlock(); + Assert.assertFalse(readWriteLock.writeLock().tryLock()); + rLock.unlock(); + + Assert.assertTrue(readWriteLock.writeLock().tryLock()); + readWriteLock.writeLock().unlock(); + } + @Test public void testWriteLock() throws InterruptedException { final RReadWriteLock lock = redisson.getReadWriteLock("lock"); @@ -48,7 +74,7 @@ public class RedissonReadWriteLockTest extends BaseConcurrentTest { t.join(50); writeLock.unlock(); - Assert.assertFalse(lock.readLock().tryLock()); + Assert.assertTrue(lock.readLock().tryLock()); Assert.assertTrue(writeLock.isHeldByCurrentThread()); writeLock.unlock(); Thread.sleep(1000); diff --git a/redisson/src/test/java/org/redisson/RedissonRedLockTest.java b/redisson/src/test/java/org/redisson/RedissonRedLockTest.java index 1791baf13..b15180870 100644 --- a/redisson/src/test/java/org/redisson/RedissonRedLockTest.java +++ b/redisson/src/test/java/org/redisson/RedissonRedLockTest.java @@ -63,6 +63,9 @@ public class RedissonRedLockTest { assertThat(executor.awaitTermination(2, TimeUnit.MINUTES)).isTrue(); assertThat(counter.get()).isEqualTo(50); + client1.shutdown(); + client2.shutdown(); + assertThat(redis1.stop()).isEqualTo(0); assertThat(redis2.stop()).isEqualTo(0); } @@ -106,6 +109,9 @@ public class RedissonRedLockTest { assertThat(executor.awaitTermination(2, TimeUnit.MINUTES)).isTrue(); assertThat(counter.get()).isEqualTo(50); + client1.shutdown(); + client2.shutdown(); + assertThat(redis1.stop()).isEqualTo(0); assertThat(redis2.stop()).isEqualTo(0); } @@ -144,6 +150,9 @@ 
public class RedissonRedLockTest { RedissonMultiLock lock = new RedissonRedLock(lock1, lock2, lock3); Assert.assertFalse(lock.tryLock()); + client1.shutdown(); + client2.shutdown(); + assertThat(redis1.stop()).isEqualTo(0); assertThat(redis2.stop()).isEqualTo(0); } @@ -190,6 +199,9 @@ public class RedissonRedLockTest { lock.lock(); lock.unlock(); + client1.shutdown(); + client2.shutdown(); + assertThat(redis1.stop()).isEqualTo(0); assertThat(redis2.stop()).isEqualTo(0); } @@ -228,6 +240,9 @@ public class RedissonRedLockTest { lock.lock(); lock.unlock(); + client1.shutdown(); + client2.shutdown(); + assertThat(redis1.stop()).isEqualTo(0); } @@ -264,6 +279,7 @@ public class RedissonRedLockTest { lock.lock(); lock.unlock(); + client.shutdown(); assertThat(redis1.stop()).isEqualTo(0); } diff --git a/redisson/src/test/java/org/redisson/RedissonReferenceReactiveTest.java b/redisson/src/test/java/org/redisson/RedissonReferenceReactiveTest.java index 08653322e..5b15242fa 100644 --- a/redisson/src/test/java/org/redisson/RedissonReferenceReactiveTest.java +++ b/redisson/src/test/java/org/redisson/RedissonReferenceReactiveTest.java @@ -7,6 +7,7 @@ import org.redisson.api.RBatch; import org.redisson.api.RBatchReactive; import org.redisson.api.RBucket; import org.redisson.api.RBucketReactive; +import org.redisson.api.RedissonClient; import org.redisson.reactive.RedissonBucketReactive; import org.redisson.reactive.RedissonMapCacheReactive; @@ -64,7 +65,8 @@ public class RedissonReferenceReactiveTest extends BaseReactiveTest { b3.set(b1); sync(batch.execute()); - RBatch b = Redisson.create(redisson.getConfig()).createBatch(); + RedissonClient lredisson = Redisson.create(redisson.getConfig()); + RBatch b = lredisson.createBatch(); b.getBucket("b1").getAsync(); b.getBucket("b2").getAsync(); b.getBucket("b3").getAsync(); @@ -72,5 +74,7 @@ public class RedissonReferenceReactiveTest extends BaseReactiveTest { assertEquals("b2", result.get(0).getName()); assertEquals("b3", result.get(1).getName()); assertEquals("b1", result.get(2).getName()); + + lredisson.shutdown(); } } diff --git a/redisson/src/test/java/org/redisson/RedissonRemoteServiceTest.java b/redisson/src/test/java/org/redisson/RedissonRemoteServiceTest.java index 7724ae26d..e80c18ffa 100644 --- a/redisson/src/test/java/org/redisson/RedissonRemoteServiceTest.java +++ b/redisson/src/test/java/org/redisson/RedissonRemoteServiceTest.java @@ -12,9 +12,12 @@ import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; +import static org.assertj.core.api.Assertions.assertThat; import org.junit.Assert; import org.junit.Test; +import static org.redisson.BaseTest.createConfig; +import static org.redisson.BaseTest.createInstance; import org.redisson.api.RFuture; import org.redisson.api.RedissonClient; import org.redisson.api.RemoteInvocationOptions; @@ -119,6 +122,14 @@ public class RedissonRemoteServiceTest extends BaseTest { Pojo doSomethingWithPojo(Pojo pojo); SerializablePojo doSomethingWithSerializablePojo(SerializablePojo pojo); + + String methodOverload(); + + String methodOverload(String str); + + String methodOverload(Long lng); + + String methodOverload(String str, Long lng); } @@ -183,6 +194,27 @@ public class RedissonRemoteServiceTest extends BaseTest { public SerializablePojo doSomethingWithSerializablePojo(SerializablePojo pojo) { return pojo; } + + @Override + public String methodOverload() { + return "methodOverload()"; + } + + 
@Override + public String methodOverload(Long lng) { + return "methodOverload(Long lng)"; + } + + @Override + public String methodOverload(String str) { + return "methodOverload(String str)"; + } + + @Override + public String methodOverload(String str, Long lng) { + return "methodOverload(String str, Long lng)"; + } + } @Test @@ -694,4 +726,21 @@ public class RedissonRemoteServiceTest extends BaseTest { server.shutdown(); } } + + @Test + public void testMethodOverload() { + RedissonClient r1 = createInstance(); + r1.getRemoteService().register(RemoteInterface.class, new RemoteImpl()); + + RedissonClient r2 = createInstance(); + RemoteInterface ri = r2.getRemoteService().get(RemoteInterface.class); + + assertThat(ri.methodOverload()).isEqualTo("methodOverload()"); + assertThat(ri.methodOverload(1l)).isEqualTo("methodOverload(Long lng)"); + assertThat(ri.methodOverload("")).isEqualTo("methodOverload(String str)"); + assertThat(ri.methodOverload("", 1l)).isEqualTo("methodOverload(String str, Long lng)"); + + r1.shutdown(); + r2.shutdown(); + } } diff --git a/redisson/src/test/java/org/redisson/RedissonScoredSortedSetReactiveTest.java b/redisson/src/test/java/org/redisson/RedissonScoredSortedSetReactiveTest.java index cc9c6605a..e295a9306 100644 --- a/redisson/src/test/java/org/redisson/RedissonScoredSortedSetReactiveTest.java +++ b/redisson/src/test/java/org/redisson/RedissonScoredSortedSetReactiveTest.java @@ -86,9 +86,9 @@ public class RedissonScoredSortedSetReactiveTest extends BaseReactiveTest { @Test public void testRemoveAsync() throws InterruptedException, ExecutionException { RScoredSortedSetReactive set = redisson.getScoredSortedSet("simple"); - set.add(0.11, 1); - set.add(0.22, 3); - set.add(0.33, 7); + sync(set.add(0.11, 1)); + sync(set.add(0.22, 3)); + sync(set.add(0.33, 7)); Assert.assertTrue(sync(set.remove(1))); Assert.assertFalse(sync(set.contains(1))); diff --git a/redisson/src/test/java/org/redisson/RedissonScoredSortedSetTest.java b/redisson/src/test/java/org/redisson/RedissonScoredSortedSetTest.java index 181f431da..0d9317848 100644 --- a/redisson/src/test/java/org/redisson/RedissonScoredSortedSetTest.java +++ b/redisson/src/test/java/org/redisson/RedissonScoredSortedSetTest.java @@ -18,12 +18,178 @@ import org.junit.Assume; import org.junit.Test; import org.redisson.api.RFuture; import org.redisson.api.RLexSortedSet; +import org.redisson.api.RList; import org.redisson.api.RScoredSortedSet; import org.redisson.api.RSortedSet; +import org.redisson.api.SortOrder; +import org.redisson.client.codec.IntegerCodec; +import org.redisson.client.codec.StringCodec; import org.redisson.client.protocol.ScoredEntry; public class RedissonScoredSortedSetTest extends BaseTest { + @Test + public void testSortOrder() { + RScoredSortedSet set = redisson.getScoredSortedSet("list", IntegerCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + Set descSort = set.readSort(SortOrder.DESC); + assertThat(descSort).containsExactly(3, 2, 1); + + Set ascSort = set.readSort(SortOrder.ASC); + assertThat(ascSort).containsExactly(1, 2, 3); + } + + @Test + public void testSortOrderLimit() { + RScoredSortedSet set = redisson.getScoredSortedSet("list", IntegerCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + Set descSort = set.readSort(SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly(2, 1); + + Set ascSort = set.readSort(SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly(2, 3); + } + + @Test + public void testSortOrderByPattern() { + 
RScoredSortedSet set = redisson.getScoredSortedSet("list", IntegerCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + Set descSort = set.readSort("test*", SortOrder.DESC); + assertThat(descSort).containsExactly(1, 2, 3); + + Set ascSort = set.readSort("test*", SortOrder.ASC); + assertThat(ascSort).containsExactly(3, 2, 1); + } + + @Test + public void testSortOrderByPatternLimit() { + RScoredSortedSet set = redisson.getScoredSortedSet("list", IntegerCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + Set descSort = set.readSort("test*", SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly(2, 3); + + Set ascSort = set.readSort("test*", SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly(2, 1); + } + + @Test + public void testSortOrderByPatternGet() { + RScoredSortedSet set = redisson.getScoredSortedSet("list", StringCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(1); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(3); + + redisson.getBucket("tester1", StringCodec.INSTANCE).set("obj1"); + redisson.getBucket("tester2", StringCodec.INSTANCE).set("obj2"); + redisson.getBucket("tester3", StringCodec.INSTANCE).set("obj3"); + + Collection descSort = set.readSort("test*", Arrays.asList("tester*"), SortOrder.DESC); + assertThat(descSort).containsExactly("obj3", "obj2", "obj1"); + + Collection ascSort = set.readSort("test*", Arrays.asList("tester*"), SortOrder.ASC); + assertThat(ascSort).containsExactly("obj1", "obj2", "obj3"); + } + + @Test + public void testSortOrderByPatternGetLimit() { + RScoredSortedSet set = redisson.getScoredSortedSet("list", StringCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(1); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(3); + + redisson.getBucket("tester1", StringCodec.INSTANCE).set("obj1"); + redisson.getBucket("tester2", StringCodec.INSTANCE).set("obj2"); + redisson.getBucket("tester3", StringCodec.INSTANCE).set("obj3"); + + Collection descSort = set.readSort("test*", Arrays.asList("tester*"), SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly("obj2", "obj1"); + + Collection ascSort = set.readSort("test*", Arrays.asList("tester*"), SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly("obj2", "obj3"); + } + + @Test + public void testSortTo() { + RScoredSortedSet set = redisson.getScoredSortedSet("list", IntegerCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + assertThat(set.sortTo("test3", SortOrder.DESC)).isEqualTo(3); + RList list2 = redisson.getList("test3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("3", "2", "1"); + + assertThat(set.sortTo("test4", SortOrder.ASC)).isEqualTo(3); + RList list3 = redisson.getList("test4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("1", "2", "3"); + + } + + @Test + public void testSortToLimit() { + RScoredSortedSet set = 
redisson.getScoredSortedSet("list", IntegerCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + assertThat(set.sortTo("test3", SortOrder.DESC, 1, 2)).isEqualTo(2); + RList list2 = redisson.getList("test3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("2", "1"); + + assertThat(set.sortTo("test4", SortOrder.ASC, 1, 2)).isEqualTo(2); + RList list3 = redisson.getList("test4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("2", "3"); + } + + @Test + public void testSortToByPattern() { + RScoredSortedSet set = redisson.getScoredSortedSet("list", IntegerCodec.INSTANCE); + set.add(10, 1); + set.add(9, 2); + set.add(8, 3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + assertThat(set.sortTo("tester3", "test*", SortOrder.DESC, 1, 2)).isEqualTo(2); + RList list2 = redisson.getList("tester3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("2", "3"); + + assertThat(set.sortTo("tester4", "test*", SortOrder.ASC, 1, 2)).isEqualTo(2); + RList list3 = redisson.getList("tester4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("2", "1"); + } + + @Test public void testCount() { RScoredSortedSet set = redisson.getScoredSortedSet("simple"); diff --git a/redisson/src/test/java/org/redisson/RedissonSemaphoreTest.java b/redisson/src/test/java/org/redisson/RedissonSemaphoreTest.java index f5c6f2f01..9994de384 100644 --- a/redisson/src/test/java/org/redisson/RedissonSemaphoreTest.java +++ b/redisson/src/test/java/org/redisson/RedissonSemaphoreTest.java @@ -13,6 +13,14 @@ import org.redisson.api.RSemaphore; public class RedissonSemaphoreTest extends BaseConcurrentTest { + @Test + public void testAcquireWithoutSetPermits() throws InterruptedException { + RSemaphore s = redisson.getSemaphore("test"); + s.release(); + s.release(); + s.acquire(2); + } + @Test public void testTrySetPermits() { RSemaphore s = redisson.getSemaphore("test"); diff --git a/redisson/src/test/java/org/redisson/RedissonSetCacheReactiveTest.java b/redisson/src/test/java/org/redisson/RedissonSetCacheReactiveTest.java index 37b3372f4..6fd89186f 100644 --- a/redisson/src/test/java/org/redisson/RedissonSetCacheReactiveTest.java +++ b/redisson/src/test/java/org/redisson/RedissonSetCacheReactiveTest.java @@ -75,10 +75,10 @@ public class RedissonSetCacheReactiveTest extends BaseReactiveTest { assertThat(sync(set.add("123", 1, TimeUnit.SECONDS))).isFalse(); - Thread.sleep(50); + Thread.sleep(800); assertThat(sync(set.contains("123"))).isTrue(); - Thread.sleep(150); + Thread.sleep(250); assertThat(sync(set.contains("123"))).isFalse(); } @@ -104,12 +104,15 @@ public class RedissonSetCacheReactiveTest extends BaseReactiveTest { } @Test - public void testIteratorSequence() { + public void testIteratorSequence() throws InterruptedException { RSetCacheReactive set = redisson.getSetCache("set"); for (int i = 0; i < 1000; i++) { - sync(set.add(Long.valueOf(i))); + set.add(Long.valueOf(i)); } + Thread.sleep(1000); + assertThat(sync(set.size())).isEqualTo(1000); + Set setCopy = new HashSet(); for (int i = 0; i < 1000; i++) { setCopy.add(Long.valueOf(i)); diff --git a/redisson/src/test/java/org/redisson/RedissonSetCacheTest.java b/redisson/src/test/java/org/redisson/RedissonSetCacheTest.java index f4732b656..8548f356e 100644 --- a/redisson/src/test/java/org/redisson/RedissonSetCacheTest.java +++ b/redisson/src/test/java/org/redisson/RedissonSetCacheTest.java 
@@ -289,7 +289,7 @@ public class RedissonSetCacheTest extends BaseTest { set.add("5"); set.add("3"); - Thread.sleep(1000); + Thread.sleep(1500); assertThat(set.toArray()).containsOnly("1", "4", "5", "3"); diff --git a/redisson/src/test/java/org/redisson/RedissonSetTest.java b/redisson/src/test/java/org/redisson/RedissonSetTest.java index 5a26005cd..e0779d3ac 100644 --- a/redisson/src/test/java/org/redisson/RedissonSetTest.java +++ b/redisson/src/test/java/org/redisson/RedissonSetTest.java @@ -4,6 +4,7 @@ import static org.assertj.core.api.Assertions.assertThat; import java.io.Serializable; import java.util.Arrays; +import java.util.Collection; import java.util.Collections; import java.util.HashSet; import java.util.Iterator; @@ -13,7 +14,11 @@ import java.util.concurrent.ExecutionException; import org.junit.Assert; import org.junit.Test; import org.redisson.api.RFuture; +import org.redisson.api.RList; import org.redisson.api.RSet; +import org.redisson.api.SortOrder; +import org.redisson.client.codec.IntegerCodec; +import org.redisson.client.codec.StringCodec; public class RedissonSetTest extends BaseTest { @@ -31,6 +36,168 @@ public class RedissonSetTest extends BaseTest { } + @Test + public void testSortOrder() { + RSet list = redisson.getSet("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + Set descSort = list.readSort(SortOrder.DESC); + assertThat(descSort).containsExactly(3, 2, 1); + + Set ascSort = list.readSort(SortOrder.ASC); + assertThat(ascSort).containsExactly(1, 2, 3); + } + + @Test + public void testSortOrderLimit() { + RSet list = redisson.getSet("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + Set descSort = list.readSort(SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly(2, 1); + + Set ascSort = list.readSort(SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly(2, 3); + } + + @Test + public void testSortOrderByPattern() { + RSet list = redisson.getSet("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + Set descSort = list.readSort("test*", SortOrder.DESC); + assertThat(descSort).containsExactly(1, 2, 3); + + Set ascSort = list.readSort("test*", SortOrder.ASC); + assertThat(ascSort).containsExactly(3, 2, 1); + } + + @Test + public void testSortOrderByPatternLimit() { + RSet list = redisson.getSet("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + Set descSort = list.readSort("test*", SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly(2, 3); + + Set ascSort = list.readSort("test*", SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly(2, 1); + } + + @Test + public void testSortOrderByPatternGet() { + RSet list = redisson.getSet("list", StringCodec.INSTANCE); + list.add("1"); + list.add("2"); + list.add("3"); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(1); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(3); + + redisson.getBucket("tester1", StringCodec.INSTANCE).set("obj1"); + redisson.getBucket("tester2", StringCodec.INSTANCE).set("obj2"); + redisson.getBucket("tester3", 
StringCodec.INSTANCE).set("obj3"); + + Collection descSort = list.readSort("test*", Arrays.asList("tester*"), SortOrder.DESC); + assertThat(descSort).containsExactly("obj3", "obj2", "obj1"); + + Collection ascSort = list.readSort("test*", Arrays.asList("tester*"), SortOrder.ASC); + assertThat(ascSort).containsExactly("obj1", "obj2", "obj3"); + } + + @Test + public void testSortOrderByPatternGetLimit() { + RSet list = redisson.getSet("list", StringCodec.INSTANCE); + list.add("1"); + list.add("2"); + list.add("3"); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(1); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(3); + + redisson.getBucket("tester1", StringCodec.INSTANCE).set("obj1"); + redisson.getBucket("tester2", StringCodec.INSTANCE).set("obj2"); + redisson.getBucket("tester3", StringCodec.INSTANCE).set("obj3"); + + Collection descSort = list.readSort("test*", Arrays.asList("tester*"), SortOrder.DESC, 1, 2); + assertThat(descSort).containsExactly("obj2", "obj1"); + + Collection ascSort = list.readSort("test*", Arrays.asList("tester*"), SortOrder.ASC, 1, 2); + assertThat(ascSort).containsExactly("obj2", "obj3"); + } + + @Test + public void testSortTo() { + RSet list = redisson.getSet("list", IntegerCodec.INSTANCE); + list.add("1"); + list.add("2"); + list.add("3"); + + assertThat(list.sortTo("test3", SortOrder.DESC)).isEqualTo(3); + RList list2 = redisson.getList("test3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("3", "2", "1"); + + assertThat(list.sortTo("test4", SortOrder.ASC)).isEqualTo(3); + RList list3 = redisson.getList("test4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("1", "2", "3"); + + } + + @Test + public void testSortToLimit() { + RSet list = redisson.getSet("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + assertThat(list.sortTo("test3", SortOrder.DESC, 1, 2)).isEqualTo(2); + RList list2 = redisson.getList("test3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("2", "1"); + + assertThat(list.sortTo("test4", SortOrder.ASC, 1, 2)).isEqualTo(2); + RList list3 = redisson.getList("test4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("2", "3"); + } + + @Test + public void testSortToByPattern() { + RSet list = redisson.getSet("list", IntegerCodec.INSTANCE); + list.add(1); + list.add(2); + list.add(3); + + redisson.getBucket("test1", IntegerCodec.INSTANCE).set(3); + redisson.getBucket("test2", IntegerCodec.INSTANCE).set(2); + redisson.getBucket("test3", IntegerCodec.INSTANCE).set(1); + + assertThat(list.sortTo("tester3", "test*", SortOrder.DESC, 1, 2)).isEqualTo(2); + RList list2 = redisson.getList("tester3", StringCodec.INSTANCE); + assertThat(list2).containsExactly("2", "3"); + + assertThat(list.sortTo("tester4", "test*", SortOrder.ASC, 1, 2)).isEqualTo(2); + RList list3 = redisson.getList("tester4", StringCodec.INSTANCE); + assertThat(list3).containsExactly("2", "1"); + } + + @Test public void testRemoveRandom() { RSet set = redisson.getSet("simple"); @@ -43,6 +210,23 @@ public class RedissonSetTest extends BaseTest { assertThat(set.removeRandom()).isIn(1, 2, 3); assertThat(set.removeRandom()).isNull(); } + + @Test + public void testRemoveRandomAmount() { + RSet set = redisson.getSet("simple"); + set.add(1); + set.add(2); + set.add(3); + set.add(4); + set.add(5); + set.add(6); + + assertThat(set.removeRandom(3)).isSubsetOf(1, 2, 3, 4, 5, 6).hasSize(3); + assertThat(set.removeRandom(2)).isSubsetOf(1, 2, 3, 4, 5, 
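The sort-related tests above exercise the `RSortable` calls now available on `RSet`: `readSort` with a `SortOrder`, with offset/count, with a BY pattern, with BY/GET patterns, and `sortTo` for SORT ... STORE. As a quick reference, the sketch below strings those same calls together outside of JUnit. It is not part of the patch: the key names (`sort_demo`, `weight_*`, `value_*`, `sort_demo_out`) and the `redis://127.0.0.1:6379` address are illustrative assumptions; only the method signatures shown in the tests are relied on.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Set;

import org.redisson.Redisson;
import org.redisson.api.RSet;
import org.redisson.api.RedissonClient;
import org.redisson.api.SortOrder;
import org.redisson.client.codec.IntegerCodec;
import org.redisson.client.codec.StringCodec;
import org.redisson.config.Config;

public class SortCommandExample {

    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // assumed local server
        RedissonClient redisson = Redisson.create(config);

        RSet<String> set = redisson.getSet("sort_demo", StringCodec.INSTANCE);
        set.add("1");
        set.add("2");
        set.add("3");

        // SORT: members in ascending order, then a descending page (offset 1, count 2)
        Set<String> asc = set.readSort(SortOrder.ASC);          // [1, 2, 3]
        Set<String> page = set.readSort(SortOrder.DESC, 1, 2);  // [2, 1]

        // SORT BY: weight each member by the external keys weight_1..weight_3
        redisson.getBucket("weight_1", IntegerCodec.INSTANCE).set(3);
        redisson.getBucket("weight_2", IntegerCodec.INSTANCE).set(2);
        redisson.getBucket("weight_3", IntegerCodec.INSTANCE).set(1);
        Set<String> byWeight = set.readSort("weight_*", SortOrder.ASC); // [3, 2, 1]

        // SORT BY ... GET: return the values stored in value_1..value_3 instead of the members
        redisson.getBucket("value_1", StringCodec.INSTANCE).set("obj1");
        redisson.getBucket("value_2", StringCodec.INSTANCE).set("obj2");
        redisson.getBucket("value_3", StringCodec.INSTANCE).set("obj3");
        Collection<String> values =
                set.readSort("weight_*", Arrays.asList("value_*"), SortOrder.ASC); // [obj3, obj2, obj1]

        // SORT ... STORE: write the sorted members into the list "sort_demo_out", returns element count
        int stored = set.sortTo("sort_demo_out", SortOrder.DESC);

        System.out.println(asc + " " + page + " " + byWeight + " " + values + " " + stored);
        redisson.shutdown();
    }
}
```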
6).hasSize(2); + assertThat(set.removeRandom(1)).isSubsetOf(1, 2, 3, 4, 5, 6).hasSize(1); + assertThat(set.removeRandom(4)).isEmpty(); + } + @Test public void testRandom() { diff --git a/redisson/src/test/java/org/redisson/RedissonSortedSetTest.java b/redisson/src/test/java/org/redisson/RedissonSortedSetTest.java index c601f055f..255550c94 100644 --- a/redisson/src/test/java/org/redisson/RedissonSortedSetTest.java +++ b/redisson/src/test/java/org/redisson/RedissonSortedSetTest.java @@ -17,6 +17,17 @@ import org.redisson.api.RSortedSet; public class RedissonSortedSetTest extends BaseTest { + @Test + public void testReadAll() { + RSortedSet set = redisson.getSortedSet("simple"); + set.add(2); + set.add(0); + set.add(1); + set.add(5); + + assertThat(set.readAll()).containsExactly(0, 1, 2, 5); + } + @Test public void testAddAsync() throws InterruptedException, ExecutionException { RSortedSet set = redisson.getSortedSet("simple"); diff --git a/redisson/src/test/java/org/redisson/RedissonTest.java b/redisson/src/test/java/org/redisson/RedissonTest.java index 2de3ca998..2e611e44d 100644 --- a/redisson/src/test/java/org/redisson/RedissonTest.java +++ b/redisson/src/test/java/org/redisson/RedissonTest.java @@ -1,16 +1,22 @@ package org.redisson; +import static com.jayway.awaitility.Awaitility.await; +import static org.assertj.core.api.Assertions.assertThat; +import static org.redisson.BaseTest.createInstance; + import java.io.IOException; import java.net.InetSocketAddress; import java.util.Collections; import java.util.Iterator; import java.util.Map; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicInteger; + import org.junit.After; import org.junit.AfterClass; - import org.junit.Assert; import org.junit.Assume; import org.junit.Before; @@ -19,7 +25,9 @@ import org.junit.Test; import org.redisson.RedisRunner.RedisProcess; import org.redisson.api.ClusterNode; import org.redisson.api.Node; +import org.redisson.api.Node.InfoSection; import org.redisson.api.NodesGroup; +import org.redisson.api.RMap; import org.redisson.api.RedissonClient; import org.redisson.client.RedisConnectionException; import org.redisson.client.RedisException; @@ -29,15 +37,44 @@ import org.redisson.codec.SerializationCodec; import org.redisson.config.Config; import org.redisson.connection.ConnectionListener; -import static com.jayway.awaitility.Awaitility.await; -import static org.assertj.core.api.Assertions.assertThat; -import static org.redisson.BaseTest.createInstance; - public class RedissonTest { protected RedissonClient redisson; protected static RedissonClient defaultRedisson; + + @Test + public void testSmallPool() throws InterruptedException { + Config config = new Config(); + config.useSingleServer() + .setConnectionMinimumIdleSize(3) + .setConnectionPoolSize(3) + .setAddress(RedisRunner.getDefaultRedisServerBindAddressAndPort()); + RedissonClient localRedisson = Redisson.create(config); + + RMap map = localRedisson.getMap("test"); + + ExecutorService executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors()*2); + long start = System.currentTimeMillis(); + int iterations = 500_000; + for (int i = 0; i < iterations; i++) { + final int j = i; + executor.execute(new Runnable() { + @Override + public void run() { + map.put("" + j, "" + j); + } + }); + } + + executor.shutdown(); + Assert.assertTrue(executor.awaitTermination(10, 
TimeUnit.MINUTES)); + + assertThat(map.size()).isEqualTo(iterations); + + localRedisson.shutdown(); + } + @Test public void testIterator() { RedissonBaseIterator iter = new RedissonBaseIterator() { @@ -121,13 +158,15 @@ public class RedissonTest { Config config = new Config(); config.useSingleServer().setAddress(p.getRedisServerAddressAndPort()).setTimeout(100000); + RedissonClient r = null; try { - RedissonClient r = Redisson.create(config); + r = Redisson.create(config); r.getKeys().flushall(); for (int i = 0; i < 10000; i++) { r.getMap("test").put("" + i, "" + i); } } finally { + r.shutdown(); p.stop(); } } @@ -139,13 +178,15 @@ public class RedissonTest { Config config = new Config(); config.useSingleServer().setAddress(p.getRedisServerAddressAndPort()).setTimeout(100000); + RedissonClient r = null; try { - RedissonClient r = Redisson.create(config); + r = Redisson.create(config); r.getKeys().flushall(); for (int i = 0; i < 10000; i++) { r.getMap("test").fastPut("" + i, "" + i); } } finally { + r.shutdown(); p.stop(); } } @@ -226,6 +267,53 @@ public class RedissonTest { Assert.assertTrue(r.isShutdown()); } + @Test + public void testNode() { + Node node = redisson.getNodesGroup().getNode(RedisRunner.getDefaultRedisServerBindAddressAndPort()); + assertThat(node).isNotNull(); + } + + @Test + public void testInfo() { + Node node = redisson.getNodesGroup().getNodes().iterator().next(); + + Map allResponse = node.info(InfoSection.ALL); + assertThat(allResponse).containsKeys("redis_version", "connected_clients"); + + Map defaultResponse = node.info(InfoSection.DEFAULT); + assertThat(defaultResponse).containsKeys("redis_version", "connected_clients"); + + Map serverResponse = node.info(InfoSection.SERVER); + assertThat(serverResponse).containsKey("redis_version"); + + Map clientsResponse = node.info(InfoSection.CLIENTS); + assertThat(clientsResponse).containsKey("connected_clients"); + + Map memoryResponse = node.info(InfoSection.MEMORY); + assertThat(memoryResponse).containsKey("used_memory_human"); + + Map persistenceResponse = node.info(InfoSection.PERSISTENCE); + assertThat(persistenceResponse).containsKey("rdb_last_save_time"); + + Map statsResponse = node.info(InfoSection.STATS); + assertThat(statsResponse).containsKey("pubsub_patterns"); + + Map replicationResponse = node.info(InfoSection.REPLICATION); + assertThat(replicationResponse).containsKey("repl_backlog_first_byte_offset"); + + Map cpuResponse = node.info(InfoSection.CPU); + assertThat(cpuResponse).containsKey("used_cpu_sys"); + + Map commandStatsResponse = node.info(InfoSection.COMMANDSTATS); + assertThat(commandStatsResponse).containsKey("cmdstat_flushall"); + + Map clusterResponse = node.info(InfoSection.CLUSTER); + assertThat(clusterResponse).containsKey("cluster_enabled"); + + Map keyspaceResponse = node.info(InfoSection.KEYSPACE); + assertThat(keyspaceResponse).isEmpty(); + } + @Test public void testTime() { NodesGroup nodes = redisson.getNodesGroup(); @@ -271,23 +359,40 @@ public class RedissonTest { } @Test - public void testSingleConfig() throws IOException { + public void testSingleConfigJSON() throws IOException { RedissonClient r = BaseTest.createInstance(); String t = r.getConfig().toJSON(); Config c = Config.fromJSON(t); assertThat(c.toJSON()).isEqualTo(t); } + + @Test + public void testSingleConfigYAML() throws IOException { + RedissonClient r = BaseTest.createInstance(); + String t = r.getConfig().toYAML(); + Config c = Config.fromYAML(t); + assertThat(c.toYAML()).isEqualTo(t); + } + @Test - public void 
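`testNode` and `testInfo` above cover the `NodesGroup.getNode` and `Node.info(InfoSection)` calls. The fragment below is a minimal usage sketch based only on the calls and section keys those tests exercise; it is not part of the patch, and the single-server address is an assumption for a locally running Redis.

```java
import java.util.Map;

import org.redisson.Redisson;
import org.redisson.api.Node;
import org.redisson.api.Node.InfoSection;
import org.redisson.api.NodesGroup;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class NodeInfoExample {

    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // assumed local server
        RedissonClient redisson = Redisson.create(config);

        NodesGroup<Node> nodes = redisson.getNodesGroup();
        for (Node node : nodes.getNodes()) {
            // Each INFO section comes back as a parsed key/value map
            Map<String, String> server = node.info(InfoSection.SERVER);
            Map<String, String> memory = node.info(InfoSection.MEMORY);

            System.out.println("redis_version = " + server.get("redis_version"));
            System.out.println("used_memory_human = " + memory.get("used_memory_human"));
        }

        redisson.shutdown();
    }
}
```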
testMasterSlaveConfig() throws IOException { + public void testMasterSlaveConfigJSON() throws IOException { Config c2 = new Config(); c2.useMasterSlaveServers().setMasterAddress("123.1.1.1:1231").addSlaveAddress("82.12.47.12:1028"); - String t = c2.toJSON(); Config c = Config.fromJSON(t); assertThat(c.toJSON()).isEqualTo(t); } + @Test + public void testMasterSlaveConfigYAML() throws IOException { + Config c2 = new Config(); + c2.useMasterSlaveServers().setMasterAddress("123.1.1.1:1231").addSlaveAddress("82.12.47.12:1028"); + String t = c2.toYAML(); + Config c = Config.fromYAML(t); + assertThat(c.toYAML()).isEqualTo(t); + } + // @Test public void testCluster() { NodesGroup nodes = redisson.getClusterNodesGroup(); @@ -329,6 +434,15 @@ public class RedissonTest { Thread.sleep(1500); } + @Test(expected = RedisConnectionException.class) + public void testReplicatedConnectionFail() throws InterruptedException { + Config config = new Config(); + config.useReplicatedServers().addNodeAddress("127.99.0.1:1111"); + Redisson.create(config); + + Thread.sleep(1500); + } + @Test(expected = RedisConnectionException.class) public void testMasterSlaveConnectionFail() throws InterruptedException { Config config = new Config(); diff --git a/redisson/src/test/java/org/redisson/RedissonTopicPatternTest.java b/redisson/src/test/java/org/redisson/RedissonTopicPatternTest.java index 65533b905..e13e42e5d 100644 --- a/redisson/src/test/java/org/redisson/RedissonTopicPatternTest.java +++ b/redisson/src/test/java/org/redisson/RedissonTopicPatternTest.java @@ -305,6 +305,7 @@ public class RedissonTopicPatternTest { await().atMost(5, TimeUnit.SECONDS).untilTrue(executed); + redisson.shutdown(); runner.stop(); } diff --git a/redisson/src/test/java/org/redisson/RedissonTopicTest.java b/redisson/src/test/java/org/redisson/RedissonTopicTest.java index ca5848001..3270fa6bf 100644 --- a/redisson/src/test/java/org/redisson/RedissonTopicTest.java +++ b/redisson/src/test/java/org/redisson/RedissonTopicTest.java @@ -318,6 +318,43 @@ public class RedissonTopicTest { redisson.shutdown(); } + @Test + public void testRemoveAllListeners() throws InterruptedException { + RedissonClient redisson = BaseTest.createInstance(); + RTopic topic1 = redisson.getTopic("topic1"); + for (int i = 0; i < 10; i++) { + topic1.addListener((channel, msg) -> { + Assert.fail(); + }); + } + + topic1 = redisson.getTopic("topic1"); + topic1.removeAllListeners(); + topic1.publish(new Message("123")); + + redisson.shutdown(); + } + + @Test + public void testRemoveByInstance() throws InterruptedException { + RedissonClient redisson = BaseTest.createInstance(); + RTopic topic1 = redisson.getTopic("topic1"); + MessageListener listener = new MessageListener() { + @Override + public void onMessage(String channel, Object msg) { + Assert.fail(); + } + }; + + topic1.addListener(listener); + + topic1 = redisson.getTopic("topic1"); + topic1.removeListener(listener); + topic1.publish(new Message("123")); + + redisson.shutdown(); + } + @Test public void testLazyUnsubscribe() throws InterruptedException { @@ -463,6 +500,7 @@ public class RedissonTopicTest { await().atMost(5, TimeUnit.SECONDS).untilTrue(executed); + redisson.shutdown(); runner.stop(); } diff --git a/redisson/src/test/java/org/redisson/TestObject.java b/redisson/src/test/java/org/redisson/TestObject.java index 083081a40..a14540f70 100644 --- a/redisson/src/test/java/org/redisson/TestObject.java +++ b/redisson/src/test/java/org/redisson/TestObject.java @@ -33,6 +33,40 @@ public class TestObject implements 
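`testRemoveAllListeners` and `testRemoveByInstance` above show the two ways to detach topic listeners: by the listener instance and all at once. The sketch below condenses that into plain usage; it is not part of the patch, the topic name and messages are made up, and the `org.redisson.api.listener.MessageListener` import path plus the string-typed topic are assumptions for the 3.x API line.

```java
import org.redisson.Redisson;
import org.redisson.api.RTopic;
import org.redisson.api.RedissonClient;
import org.redisson.api.listener.MessageListener;
import org.redisson.config.Config;

public class TopicListenerExample {

    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379"); // assumed local server
        RedissonClient redisson = Redisson.create(config);

        RTopic<String> topic = redisson.getTopic("topic1");

        // Register a listener and later remove exactly this instance
        MessageListener<String> listener = (channel, msg) -> System.out.println("received: " + msg);
        topic.addListener(listener);
        topic.publish("hello");
        topic.removeListener(listener);

        // Or drop every listener registered on the topic at once
        topic.addListener((channel, msg) -> System.out.println("other listener: " + msg));
        topic.removeAllListeners();
        topic.publish("ignored"); // no listeners remain on this instance

        redisson.shutdown();
    }
}
```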
Comparable, Serializable { return res; } + @Override + public int hashCode() { + final int prime = 31; + int result = 1; + result = prime * result + ((name == null) ? 0 : name.hashCode()); + result = prime * result + ((value == null) ? 0 : value.hashCode()); + return result; + } + @Override + public boolean equals(Object obj) { + if (this == obj) + return true; + if (obj == null) + return false; + if (getClass() != obj.getClass()) + return false; + TestObject other = (TestObject) obj; + if (name == null) { + if (other.name != null) + return false; + } else if (!name.equals(other.name)) + return false; + if (value == null) { + if (other.value != null) + return false; + } else if (!value.equals(other.value)) + return false; + return true; + } + + @Override + public String toString() { + return "TestObject [name=" + name + ", value=" + value + "]"; + } } diff --git a/redisson/src/test/java/org/redisson/client/protocol/decoder/ClusterNodesDecoderTest.java b/redisson/src/test/java/org/redisson/client/protocol/decoder/ClusterNodesDecoderTest.java new file mode 100644 index 000000000..69fe85e60 --- /dev/null +++ b/redisson/src/test/java/org/redisson/client/protocol/decoder/ClusterNodesDecoderTest.java @@ -0,0 +1,35 @@ +package org.redisson.client.protocol.decoder; + +import java.io.IOException; +import java.util.List; + +import org.junit.Assert; +import org.junit.Test; +import org.redisson.cluster.ClusterNodeInfo; + +import io.netty.buffer.ByteBuf; +import io.netty.buffer.Unpooled; + +public class ClusterNodesDecoderTest { + + @Test + public void test() throws IOException { + ClusterNodesDecoder decoder = new ClusterNodesDecoder(); + ByteBuf buf = Unpooled.buffer(); + + String info = "7af253f8c20a3b3fbd481801bd361ec6643c6f0b 192.168.234.129:7001@17001 master - 0 1478865073260 8 connected 5461-10922\n" + + "a0d6a300f9f3b139c89cf45b75dbb7e4a01bb6b5 192.168.234.131:7005@17005 slave 5b00efb410f14ba5bb0a153c057e431d9ee4562e 0 1478865072251 5 connected\n" + + "454b8aaab7d8687822923da37a91fc0eecbe7a88 192.168.234.130:7002@17002 slave 7af253f8c20a3b3fbd481801bd361ec6643c6f0b 0 1478865072755 8 connected\n" + + "5b00efb410f14ba5bb0a153c057e431d9ee4562e 192.168.234.131:7004@17004 master - 0 1478865071746 5 connected 10923-16383\n" + + "14edcdebea55853533a24d5cdc560ecc06ec5295 192.168.234.130:7003@17003 myself,master - 0 0 7 connected 0-5460\n" + + "58d9f7c6d801aeebaf0e04e1aacb991e7e0ca8ff 192.168.234.129:7000@17000 slave 14edcdebea55853533a24d5cdc560ecc06ec5295 0 1478865071241 7 connected\n"; + + byte[] src = info.getBytes(); + buf.writeBytes(src); + List nodes = decoder.decode(buf, null); + ClusterNodeInfo node = nodes.get(0); + Assert.assertEquals("192.168.234.129", node.getAddress().getHost()); + Assert.assertEquals(7001, node.getAddress().getPort()); + } + +} diff --git a/redisson/src/test/java/org/redisson/executor/RedissonScheduledExecutorServiceTest.java b/redisson/src/test/java/org/redisson/executor/RedissonScheduledExecutorServiceTest.java index 352ea5d1d..30c9d4fb4 100644 --- a/redisson/src/test/java/org/redisson/executor/RedissonScheduledExecutorServiceTest.java +++ b/redisson/src/test/java/org/redisson/executor/RedissonScheduledExecutorServiceTest.java @@ -15,9 +15,9 @@ import org.junit.Before; import org.junit.BeforeClass; import org.junit.Test; import org.redisson.BaseTest; -import org.redisson.CronSchedule; import org.redisson.RedissonNode; import org.redisson.RedissonRuntimeEnvironment; +import org.redisson.api.CronSchedule; import org.redisson.api.RScheduledExecutorService; import 
org.redisson.api.RScheduledFuture; import org.redisson.config.Config; diff --git a/redisson/src/test/java/org/redisson/jcache/JCacheTest.java b/redisson/src/test/java/org/redisson/jcache/JCacheTest.java new file mode 100644 index 000000000..beefd07d8 --- /dev/null +++ b/redisson/src/test/java/org/redisson/jcache/JCacheTest.java @@ -0,0 +1,133 @@ +package org.redisson.jcache; + +import static org.assertj.core.api.Assertions.assertThat; + +import java.io.IOException; +import java.io.Serializable; +import java.net.URI; +import java.net.URISyntaxException; +import java.net.URL; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; + +import javax.cache.Cache; +import javax.cache.Caching; +import javax.cache.configuration.Configuration; +import javax.cache.configuration.FactoryBuilder; +import javax.cache.configuration.MutableCacheEntryListenerConfiguration; +import javax.cache.configuration.MutableConfiguration; +import javax.cache.event.CacheEntryEvent; +import javax.cache.event.CacheEntryExpiredListener; +import javax.cache.event.CacheEntryListenerException; +import javax.cache.expiry.CreatedExpiryPolicy; +import javax.cache.expiry.Duration; + +import org.junit.Assert; +import org.junit.Test; +import org.redisson.BaseTest; +import org.redisson.RedisRunner; +import org.redisson.RedisRunner.FailedToStartRedisException; +import org.redisson.RedisRunner.RedisProcess; +import org.redisson.config.Config; +import org.redisson.jcache.configuration.RedissonConfiguration; + +public class JCacheTest extends BaseTest { + + @Test + public void testRedissonConfig() throws InterruptedException, IllegalArgumentException, URISyntaxException, IOException { + RedisProcess runner = new RedisRunner() + .nosave() + .randomDir() + .port(6311) + .run(); + + URL configUrl = getClass().getResource("redisson-jcache.json"); + Config cfg = Config.fromJSON(configUrl); + + Configuration config = RedissonConfiguration.fromConfig(cfg); + Cache cache = Caching.getCachingProvider().getCacheManager() + .createCache("test", config); + + cache.put("1", "2"); + Assert.assertEquals("2", cache.get("1")); + + cache.close(); + runner.stop(); + } + + @Test + public void testRedissonInstance() throws InterruptedException, IllegalArgumentException, URISyntaxException { + Configuration config = RedissonConfiguration.fromInstance(redisson); + Cache cache = Caching.getCachingProvider().getCacheManager() + .createCache("test", config); + + cache.put("1", "2"); + Assert.assertEquals("2", cache.get("1")); + + cache.close(); + } + + @Test + public void testExpiration() throws InterruptedException, IllegalArgumentException, URISyntaxException, FailedToStartRedisException, IOException { + RedisProcess runner = new RedisRunner() + .nosave() + .randomDir() + .port(6311) + .run(); + + MutableConfiguration config = new MutableConfiguration<>(); + config.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 1))); + config.setStoreByValue(true); + + URI configUri = getClass().getResource("redisson-jcache.json").toURI(); + Cache cache = Caching.getCachingProvider().getCacheManager(configUri, null) + .createCache("test", config); + + CountDownLatch latch = new CountDownLatch(1); + + String key = "123"; + ExpiredListener clientListener = new ExpiredListener(latch, key, "90"); + MutableCacheEntryListenerConfiguration listenerConfiguration = + new MutableCacheEntryListenerConfiguration(FactoryBuilder.factoryOf(clientListener), null, true, true); + 
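`testRedissonConfig` and `testRedissonInstance` above create a JCache backed by Redisson through `RedissonConfiguration`. The sketch below rearranges those same calls into a standalone snippet; it is not part of the patch, the server address is an assumption (the bundled test resource points at `redis://127.0.0.1:6311`), and the cache name `test` is simply kept from the tests. An existing `RedissonClient` could be passed via `RedissonConfiguration.fromInstance` instead, as `testRedissonInstance` does.

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.Configuration;

import org.redisson.config.Config;
import org.redisson.jcache.configuration.RedissonConfiguration;

public class JCacheExample {

    public static void main(String[] args) {
        // Plain Redisson config; the address below is an assumption for a local server
        Config cfg = new Config();
        cfg.useSingleServer().setAddress("redis://127.0.0.1:6379");

        // Wrap it into a JCache Configuration and create the cache through the standard SPI
        Configuration<String, String> jcacheConfig = RedissonConfiguration.fromConfig(cfg);
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        Cache<String, String> cache = manager.createCache("test", jcacheConfig);

        cache.put("1", "2");
        System.out.println(cache.get("1")); // prints 2

        cache.close();
    }
}
```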
cache.registerCacheEntryListener(listenerConfiguration); + + cache.put(key, "90"); + Assert.assertNotNull(cache.get(key)); + + latch.await(); + + Assert.assertNull(cache.get(key)); + + cache.close(); + runner.stop(); + } + + public static class ExpiredListener implements CacheEntryExpiredListener, Serializable { + + private Object key; + private Object value; + private CountDownLatch latch; + + public ExpiredListener(CountDownLatch latch, Object key, Object value) { + super(); + this.latch = latch; + this.key = key; + this.value = value; + } + + + + @Override + public void onExpired(Iterable> events) + throws CacheEntryListenerException { + CacheEntryEvent entry = events.iterator().next(); + + assertThat(entry.getKey()).isEqualTo(key); + assertThat(entry.getValue()).isEqualTo(value); + latch.countDown(); + } + + + } + +} diff --git a/redisson/src/test/java/org/redisson/misc/LogHelperTest.java b/redisson/src/test/java/org/redisson/misc/LogHelperTest.java new file mode 100644 index 000000000..7fb99b12e --- /dev/null +++ b/redisson/src/test/java/org/redisson/misc/LogHelperTest.java @@ -0,0 +1,223 @@ +package org.redisson.misc; + +import static org.hamcrest.MatcherAssert.assertThat; +import static org.hamcrest.Matchers.is; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import org.junit.Test; + +/** + * @author Philipp Marx + */ +public class LogHelperTest { + + @Test + public void toStringWithNull() { + assertThat(LogHelper.toString(null), is("null")); + } + + @Test + public void toStringWithNestedPrimitives() { + Object[] input = new Object[] { "0", 1, 2L, 3.1D, 4.2F, (byte) 5, '6' }; + + assertThat(LogHelper.toString(input), is("[0, 1, 2, 3.1, 4.2, 5, 6]")); + } + + @Test + public void toStringWithPrimitive() { + assertThat(LogHelper.toString("0"), is("0")); + assertThat(LogHelper.toString(1), is("1")); + assertThat(LogHelper.toString(2L), is("2")); + assertThat(LogHelper.toString(3.1D), is("3.1")); + assertThat(LogHelper.toString(4.2F), is("4.2")); + assertThat(LogHelper.toString((byte) 5), is("5")); + assertThat(LogHelper.toString('6'), is("6")); + } + + @Test + public void toStringWithNestedSmallArrays() { + String[] strings = new String[] { "0" }; + int[] ints = new int[] { 1 }; + long[] longs = new long[] { 2L }; + double[] doubles = new double[] { 3.1D }; + float[] floats = new float[] { 4.2F }; + byte[] bytes = new byte[] { (byte) 5 }; + char[] chars = new char[] { '6' }; + + Object[] input = new Object[] { strings, ints, longs, doubles, floats, bytes, chars }; + + assertThat(LogHelper.toString(input), is("[[0], [1], [2], [3.1], [4.2], [5], [6]]")); + } + + @Test + public void toStringWithNestedSmallCollections() { + List strings = Arrays.asList("0" ); + List ints = Arrays.asList( 1 ); + List longs = Arrays.asList( 2L ); + List doubles = Arrays.asList( 3.1D ); + List floats = Arrays.asList( 4.2F ); + List bytes = Arrays.asList( (byte) 5 ); + List chars = Arrays.asList( '6' ); + + Object[] input = new Object[] { strings, ints, longs, doubles, floats, bytes, chars }; + + assertThat(LogHelper.toString(input), is("[[0], [1], [2], [3.1], [4.2], [5], [6]]")); + } + + @Test + public void toStringWithSmallArrays() { + String[] strings = new String[] { "0" }; + int[] ints = new int[] { 1 }; + long[] longs = new long[] { 2L }; + double[] doubles = new double[] { 3.1D }; + float[] floats = new float[] { 4.2F }; + byte[] bytes = new byte[] { (byte) 5 }; + char[] chars = new char[] { '6' }; + + assertThat(LogHelper.toString(strings), is("[0]")); + 
assertThat(LogHelper.toString(ints), is("[1]")); + assertThat(LogHelper.toString(longs), is("[2]")); + assertThat(LogHelper.toString(doubles), is("[3.1]")); + assertThat(LogHelper.toString(floats), is("[4.2]")); + assertThat(LogHelper.toString(bytes), is("[5]")); + assertThat(LogHelper.toString(chars), is("[6]")); + } + + @Test + public void toStringWithSmallCollections() { + List strings = Collections.nCopies(1, "0"); + List ints = Collections.nCopies(1, 1); + List longs = Collections.nCopies(1, 2L); + List doubles = Collections.nCopies(1, 3.1D); + List floats = Collections.nCopies(1, 4.2F); + List bytes = Collections.nCopies(1, (byte)5); + List chars = Collections.nCopies(1, '6'); + + assertThat(LogHelper.toString(strings), is("[0]")); + assertThat(LogHelper.toString(ints), is("[1]")); + assertThat(LogHelper.toString(longs), is("[2]")); + assertThat(LogHelper.toString(doubles), is("[3.1]")); + assertThat(LogHelper.toString(floats), is("[4.2]")); + assertThat(LogHelper.toString(bytes), is("[5]")); + assertThat(LogHelper.toString(chars), is("[6]")); + } + + @Test + public void toStringWithNestedBigArrays() { + String[] strings = new String[15]; + Arrays.fill(strings, "0"); + int[] ints = new int[15]; + Arrays.fill(ints, 1); + long[] longs = new long[15]; + Arrays.fill(longs, 2L); + double[] doubles = new double[15]; + Arrays.fill(doubles, 3.1D); + float[] floats = new float[15]; + Arrays.fill(floats, 4.2F); + byte[] bytes = new byte[15]; + Arrays.fill(bytes, (byte) 5); + char[] chars = new char[15]; + Arrays.fill(chars, '6'); + + Object[] input = new Object[] { strings, ints, longs, doubles, floats, bytes, chars }; + StringBuilder sb = new StringBuilder(); + sb.append("[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...], "); + sb.append("[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...], "); + sb.append("[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...], "); + sb.append("[3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, ...], "); + sb.append("[4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, ...], "); + sb.append("[5, 5, 5, 5, 5, 5, 5, 5, 5, 5, ...], "); + sb.append("[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...]]"); + + assertThat(LogHelper.toString(input), is(sb.toString())); + } + + @Test + public void toStringWithNestedBigCollections() { + List strings = Collections.nCopies(15, "0"); + List ints = Collections.nCopies(15, 1); + List longs = Collections.nCopies(15, 2L); + List doubles = Collections.nCopies(15, 3.1D); + List floats = Collections.nCopies(15, 4.2F); + List bytes = Collections.nCopies(15, (byte)5); + List chars = Collections.nCopies(15, '6'); + + Object[] input = new Object[] { strings, ints, longs, doubles, floats, bytes, chars }; + StringBuilder sb = new StringBuilder(); + sb.append("[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...], "); + sb.append("[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...], "); + sb.append("[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...], "); + sb.append("[3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, ...], "); + sb.append("[4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, ...], "); + sb.append("[5, 5, 5, 5, 5, 5, 5, 5, 5, 5, ...], "); + sb.append("[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...]]"); + + assertThat(LogHelper.toString(input), is(sb.toString())); + } + + @Test + public void toStringWithBigArrays() { + String[] strings = new String[15]; + Arrays.fill(strings, "0"); + int[] ints = new int[15]; + Arrays.fill(ints, 1); + long[] longs = new long[15]; + Arrays.fill(longs, 2L); + double[] doubles = new double[15]; + Arrays.fill(doubles, 3.1D); + float[] floats = new float[15]; + Arrays.fill(floats, 4.2F); + byte[] bytes = new byte[15]; 
+ Arrays.fill(bytes, (byte) 5); + char[] chars = new char[15]; + Arrays.fill(chars, '6'); + + assertThat(LogHelper.toString(strings), is("[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...]")); + assertThat(LogHelper.toString(ints), is("[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]")); + assertThat(LogHelper.toString(longs), is("[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...]")); + assertThat(LogHelper.toString(doubles), is("[3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, ...]")); + assertThat(LogHelper.toString(floats), is("[4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, ...]")); + assertThat(LogHelper.toString(bytes), is("[5, 5, 5, 5, 5, 5, 5, 5, 5, 5, ...]")); + assertThat(LogHelper.toString(chars), is("[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...]")); + } + + @Test + public void toStringWithBigCollections() { + List strings = Collections.nCopies(15, "0"); + List ints = Collections.nCopies(15, 1); + List longs = Collections.nCopies(15, 2L); + List doubles = Collections.nCopies(15, 3.1D); + List floats = Collections.nCopies(15, 4.2F); + List bytes = Collections.nCopies(15, (byte)5); + List chars = Collections.nCopies(15, '6'); + + assertThat(LogHelper.toString(strings), is("[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...]")); + assertThat(LogHelper.toString(ints), is("[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]")); + assertThat(LogHelper.toString(longs), is("[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...]")); + assertThat(LogHelper.toString(doubles), is("[3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, 3.1, ...]")); + assertThat(LogHelper.toString(floats), is("[4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, 4.2, ...]")); + assertThat(LogHelper.toString(bytes), is("[5, 5, 5, 5, 5, 5, 5, 5, 5, 5, ...]")); + assertThat(LogHelper.toString(chars), is("[6, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...]")); + } + + @Test + public void toStringWithSmallString() { + char[] charsForStr = new char[100]; + Arrays.fill(charsForStr, '7'); + String string = new String(charsForStr); + + assertThat(LogHelper.toString(string), is(string)); + } + + @Test + public void toStringWithBigString() { + char[] charsForStr = new char[150]; + Arrays.fill(charsForStr, '7'); + String string = new String(charsForStr); + + assertThat(LogHelper.toString(string), is(string.substring(0, 100) + "...")); + } +} diff --git a/redisson/src/test/java/org/redisson/spring/session/Config.java b/redisson/src/test/java/org/redisson/spring/session/Config.java new file mode 100644 index 000000000..d8994e597 --- /dev/null +++ b/redisson/src/test/java/org/redisson/spring/session/Config.java @@ -0,0 +1,21 @@ +package org.redisson.spring.session; + +import org.redisson.Redisson; +import org.redisson.api.RedissonClient; +import org.redisson.spring.session.config.EnableRedissonHttpSession; +import org.springframework.context.annotation.Bean; + +@EnableRedissonHttpSession +public class Config { + + @Bean + public RedissonClient redisson() { + return Redisson.create(); + } + + @Bean + public SessionEventsListener listener() { + return new SessionEventsListener(); + } + +} diff --git a/redisson/src/test/java/org/redisson/spring/session/ConfigTimeout.java b/redisson/src/test/java/org/redisson/spring/session/ConfigTimeout.java new file mode 100644 index 000000000..e18e93867 --- /dev/null +++ b/redisson/src/test/java/org/redisson/spring/session/ConfigTimeout.java @@ -0,0 +1,21 @@ +package org.redisson.spring.session; + +import org.redisson.Redisson; +import org.redisson.api.RedissonClient; +import org.redisson.spring.session.config.EnableRedissonHttpSession; +import org.springframework.context.annotation.Bean; + 
+@EnableRedissonHttpSession(maxInactiveIntervalInSeconds = 5) +public class ConfigTimeout { + + @Bean + public RedissonClient redisson() { + return Redisson.create(); + } + + @Bean + public SessionEventsListener listener() { + return new SessionEventsListener(); + } + +} diff --git a/redisson/src/test/java/org/redisson/spring/session/Initializer.java b/redisson/src/test/java/org/redisson/spring/session/Initializer.java new file mode 100644 index 000000000..ec85970a9 --- /dev/null +++ b/redisson/src/test/java/org/redisson/spring/session/Initializer.java @@ -0,0 +1,13 @@ +package org.redisson.spring.session; + +import org.springframework.session.web.context.AbstractHttpSessionApplicationInitializer; + +public class Initializer extends AbstractHttpSessionApplicationInitializer { + + public static Class CONFIG_CLASS = Config.class; + + public Initializer() { + super(CONFIG_CLASS); + } + +} diff --git a/redisson/src/test/java/org/redisson/spring/session/RedissonSessionManagerTest.java b/redisson/src/test/java/org/redisson/spring/session/RedissonSessionManagerTest.java new file mode 100644 index 000000000..9f217cf2a --- /dev/null +++ b/redisson/src/test/java/org/redisson/spring/session/RedissonSessionManagerTest.java @@ -0,0 +1,239 @@ +package org.redisson.spring.session; + +import java.io.IOException; + +import org.apache.http.client.ClientProtocolException; +import org.apache.http.client.fluent.Executor; +import org.apache.http.client.fluent.Request; +import org.apache.http.cookie.Cookie; +import org.apache.http.impl.client.BasicCookieStore; +import org.junit.AfterClass; +import org.junit.Assert; +import org.junit.BeforeClass; +import org.junit.Test; +import org.redisson.RedisRunner; +import org.redisson.RedisRunner.KEYSPACE_EVENTS_OPTIONS; +import org.redisson.RedissonRuntimeEnvironment; +import org.springframework.web.context.WebApplicationContext; +import org.springframework.web.context.support.WebApplicationContextUtils; + +public class RedissonSessionManagerTest { + + private static RedisRunner.RedisProcess defaultRedisInstance; + + @AfterClass + public static void afterClass() throws IOException, InterruptedException { + if (!RedissonRuntimeEnvironment.isTravis) { + defaultRedisInstance.stop(); + } + } + + @BeforeClass + public static void beforeClass() throws IOException, InterruptedException { + if (!RedissonRuntimeEnvironment.isTravis) { + defaultRedisInstance = new RedisRunner() + .nosave() + .port(6379) + .randomDir() + .notifyKeyspaceEvents(KEYSPACE_EVENTS_OPTIONS.E, + KEYSPACE_EVENTS_OPTIONS.x, + KEYSPACE_EVENTS_OPTIONS.g) + .run(); + + } + } + + @Test + public void testSwitchServer() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + + Executor.closeIdleConnections(); + server.stop(); + + server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "1234"); + remove(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + + @Test + public void testWriteReadRemove() throws Exception { + // start the server at http://localhost:8080/myapp + 
TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1234"); + read(executor, "test", "1234"); + remove(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testRecreate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1"); + recreate(executor, "test", "2"); + read(executor, "test", "2"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testUpdate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + + Executor executor = Executor.newInstance(); + + write(executor, "test", "1"); + read(executor, "test", "1"); + write(executor, "test", "2"); + read(executor, "test", "2"); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testExpire() throws Exception { + Initializer.CONFIG_CLASS = ConfigTimeout.class; + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + WebApplicationContext wa = WebApplicationContextUtils.getRequiredWebApplicationContext(server.getServletContext()); + SessionEventsListener listener = wa.getBean(SessionEventsListener.class); + + Executor executor = Executor.newInstance(); + BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + + Assert.assertEquals(1, listener.getSessionCreatedEvents()); + Assert.assertEquals(0, listener.getSessionExpiredEvents()); + + Executor.closeIdleConnections(); + + Thread.sleep(6000); + + Assert.assertEquals(1, listener.getSessionCreatedEvents()); + Assert.assertEquals(1, listener.getSessionExpiredEvents()); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "null"); + + Assert.assertEquals(2, listener.getSessionCreatedEvents()); + + write(executor, "test", "1234"); + Thread.sleep(3000); + read(executor, "test", "1234"); + Thread.sleep(3000); + Assert.assertEquals(1, listener.getSessionExpiredEvents()); + Thread.sleep(1000); + Assert.assertEquals(1, listener.getSessionExpiredEvents()); + Thread.sleep(3000); + Assert.assertEquals(2, listener.getSessionExpiredEvents()); + + Executor.closeIdleConnections(); + server.stop(); + } + + @Test + public void testInvalidate() throws Exception { + // start the server at http://localhost:8080/myapp + TomcatServer server = new TomcatServer("myapp", 8080, "src/test/"); + server.start(); + WebApplicationContext wa = WebApplicationContextUtils.getRequiredWebApplicationContext(server.getServletContext()); + SessionEventsListener listener = wa.getBean(SessionEventsListener.class); + + Executor executor = Executor.newInstance(); + BasicCookieStore cookieStore = new BasicCookieStore(); + executor.use(cookieStore); + + write(executor, "test", "1234"); + Cookie cookie = cookieStore.getCookies().get(0); + + Assert.assertEquals(1, listener.getSessionCreatedEvents()); + Assert.assertEquals(0, listener.getSessionDeletedEvents()); + + invalidate(executor); + + Assert.assertEquals(1, 
listener.getSessionCreatedEvents()); + Assert.assertEquals(1, listener.getSessionDeletedEvents()); + + Executor.closeIdleConnections(); + + executor = Executor.newInstance(); + cookieStore = new BasicCookieStore(); + cookieStore.addCookie(cookie); + executor.use(cookieStore); + read(executor, "test", "null"); + + Executor.closeIdleConnections(); + server.stop(); + } + + private void write(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/write?key=" + key + "&value=" + value; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + + private void read(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/read?key=" + key; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals(value, response); + } + + private void remove(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/remove?key=" + key; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals(value, response); + } + + private void invalidate(Executor executor) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/invalidate"; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + + private void recreate(Executor executor, String key, String value) throws IOException, ClientProtocolException { + String url = "http://localhost:8080/myapp/recreate?key=" + key + "&value=" + value; + String response = executor.execute(Request.Get(url)).returnContent().asString(); + Assert.assertEquals("OK", response); + } + +} diff --git a/redisson/src/test/java/org/redisson/spring/session/SessionEventsListener.java b/redisson/src/test/java/org/redisson/spring/session/SessionEventsListener.java new file mode 100644 index 000000000..549b7b89f --- /dev/null +++ b/redisson/src/test/java/org/redisson/spring/session/SessionEventsListener.java @@ -0,0 +1,40 @@ +package org.redisson.spring.session; + +import org.springframework.context.ApplicationListener; +import org.springframework.session.events.AbstractSessionEvent; +import org.springframework.session.events.SessionCreatedEvent; +import org.springframework.session.events.SessionDeletedEvent; +import org.springframework.session.events.SessionExpiredEvent; + +public class SessionEventsListener implements ApplicationListener { + + private int sessionCreatedEvents; + private int sessionDeletedEvents; + private int sessionExpiredEvents; + + @Override + public void onApplicationEvent(AbstractSessionEvent event) { + if (event instanceof SessionCreatedEvent) { + sessionCreatedEvents++; + } + if (event instanceof SessionDeletedEvent) { + sessionDeletedEvents++; + } + if (event instanceof SessionExpiredEvent) { + sessionExpiredEvents++; + } + } + + public int getSessionCreatedEvents() { + return sessionCreatedEvents; + } + + public int getSessionDeletedEvents() { + return sessionDeletedEvents; + } + + public int getSessionExpiredEvents() { + return sessionExpiredEvents; + } + +} diff --git a/redisson/src/test/java/org/redisson/spring/session/TestServlet.java b/redisson/src/test/java/org/redisson/spring/session/TestServlet.java new file mode 100644 index 000000000..a011e9f24 --- /dev/null +++ 
b/redisson/src/test/java/org/redisson/spring/session/TestServlet.java @@ -0,0 +1,96 @@ +package org.redisson.spring.session; + +import java.io.IOException; + +import javax.servlet.ServletException; +import javax.servlet.annotation.WebServlet; +import javax.servlet.http.HttpServlet; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; +import javax.servlet.http.HttpSession; + +@WebServlet(name = "/testServlet", urlPatterns = "/*") +public class TestServlet extends HttpServlet { + + private static final long serialVersionUID = 1243830648280853203L; + + @Override + protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { + HttpSession session = req.getSession(); + + if (req.getPathInfo().equals("/write")) { + String[] params = req.getQueryString().split("&"); + String key = null; + String value = null; + for (String param : params) { + String[] paramLine = param.split("="); + String keyParam = paramLine[0]; + String valueParam = paramLine[1]; + + if ("key".equals(keyParam)) { + key = valueParam; + } + if ("value".equals(keyParam)) { + value = valueParam; + } + } + session.setAttribute(key, value); + + resp.getWriter().print("OK"); + } else if (req.getPathInfo().equals("/read")) { + String[] params = req.getQueryString().split("&"); + String key = null; + for (String param : params) { + String[] line = param.split("="); + String keyParam = line[0]; + if ("key".equals(keyParam)) { + key = line[1]; + } + } + + Object attr = session.getAttribute(key); + resp.getWriter().print(attr); + } else if (req.getPathInfo().equals("/remove")) { + String[] params = req.getQueryString().split("&"); + String key = null; + for (String param : params) { + String[] line = param.split("="); + String keyParam = line[0]; + if ("key".equals(keyParam)) { + key = line[1]; + } + } + + session.removeAttribute(key); + resp.getWriter().print(String.valueOf(session.getAttribute(key))); + } else if (req.getPathInfo().equals("/invalidate")) { + session.invalidate(); + + resp.getWriter().print("OK"); + } else if (req.getPathInfo().equals("/recreate")) { + session.invalidate(); + + session = req.getSession(); + + String[] params = req.getQueryString().split("&"); + String key = null; + String value = null; + for (String param : params) { + String[] paramLine = param.split("="); + String keyParam = paramLine[0]; + String valueParam = paramLine[1]; + + if ("key".equals(keyParam)) { + key = valueParam; + } + if ("value".equals(keyParam)) { + value = valueParam; + } + } + session.setAttribute(key, value); + + resp.getWriter().print("OK"); + } + } + +} diff --git a/redisson/src/test/java/org/redisson/spring/session/TomcatServer.java b/redisson/src/test/java/org/redisson/spring/session/TomcatServer.java new file mode 100644 index 000000000..68c1d3a04 --- /dev/null +++ b/redisson/src/test/java/org/redisson/spring/session/TomcatServer.java @@ -0,0 +1,61 @@ +package org.redisson.spring.session; + +import java.io.File; +import java.net.MalformedURLException; + +import javax.servlet.ServletContext; +import javax.servlet.ServletException; + +import org.apache.catalina.LifecycleException; +import org.apache.catalina.core.StandardContext; +import org.apache.catalina.startup.Tomcat; +import org.apache.naming.resources.VirtualDirContext; + +public class TomcatServer { + + private Tomcat tomcat = new Tomcat(); + private StandardContext ctx; + + public TomcatServer(String contextPath, int port, String appBase) throws MalformedURLException, 
ServletException { + if(contextPath == null || appBase == null || appBase.length() == 0) { + throw new IllegalArgumentException("Context path or appbase should not be null"); + } + if(!contextPath.startsWith("/")) { + contextPath = "/" + contextPath; + } + + tomcat.setBaseDir("."); // location where temp dir is created + tomcat.setPort(port); + tomcat.getHost().setAppBase("."); + + ctx = (StandardContext) tomcat.addWebapp(contextPath, appBase); + ctx.setDelegate(true); + + File additionWebInfClasses = new File("target/test-classes"); + VirtualDirContext resources = new VirtualDirContext(); + resources.setExtraResourcePaths("/WEB-INF/classes=" + additionWebInfClasses); + ctx.setResources(resources); + } + + /** + * Start the tomcat embedded server + */ + public void start() throws LifecycleException { + tomcat.start(); + } + + /** + * Stop the tomcat embedded server + */ + public void stop() throws LifecycleException { + tomcat.stop(); + tomcat.destroy(); + tomcat.getServer().await(); + } + + public ServletContext getServletContext() { + return ctx.getServletContext(); + } + + +} \ No newline at end of file diff --git a/redisson/src/test/resources/org/redisson/jcache/redisson-jcache.json b/redisson/src/test/resources/org/redisson/jcache/redisson-jcache.json new file mode 100644 index 000000000..484195c43 --- /dev/null +++ b/redisson/src/test/resources/org/redisson/jcache/redisson-jcache.json @@ -0,0 +1,5 @@ +{ + "singleServerConfig":{ + "address": "redis://127.0.0.1:6311" + } +} \ No newline at end of file diff --git a/redisson/src/test/resources/redis_connectionListener_test.conf b/redisson/src/test/resources/redis_connectionListener_test.conf deleted file mode 100644 index d91a36715..000000000 --- a/redisson/src/test/resources/redis_connectionListener_test.conf +++ /dev/null @@ -1,622 +0,0 @@ -# Redis configuration file example - -# Note on units: when memory size is needed, it is possible to specify -# it in the usual form of 1k 5GB 4M and so forth: -# -# 1k => 1000 bytes -# 1kb => 1024 bytes -# 1m => 1000000 bytes -# 1mb => 1024*1024 bytes -# 1g => 1000000000 bytes -# 1gb => 1024*1024*1024 bytes -# -# units are case insensitive so 1GB 1Gb 1gB are all the same. - -# By default Redis does not run as a daemon. Use 'yes' if you need it. -# Note that Redis will write a pid file in /var/run/redis.pid when daemonized. -daemonize no - -# When running daemonized, Redis writes a pid file in /var/run/redis.pid by -# default. You can specify a custom pid file location here. -#pidfile /var/run/redis.pid - -# Accept connections on the specified port, default is 6379. -# If port 0 is specified Redis will not listen on a TCP socket. -port 6319 - -# If you want you can bind a single interface, if the bind option is not -# specified all the interfaces will listen for incoming connections. -# -# bind 127.0.0.1 - -# Specify the path for the unix socket that will be used to listen for -# incoming connections. There is no default, so Redis will not listen -# on a unix socket when not specified. -# -# unixsocket /tmp/redis.sock -# unixsocketperm 755 - -# Close the connection after a client is idle for N seconds (0 to disable) -timeout 0 - -# TCP keepalive. -# -# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence -# of communication. This is useful for two reasons: -# -# 1) Detect dead peers. -# 2) Take the connection alive from the point of view of network -# equipment in the middle. -# -# On Linux, the specified value (in seconds) is the period used to send ACKs. 
-# Note that to close the connection the double of the time is needed. -# On other kernels the period depends on the kernel configuration. -# -# A reasonable value for this option is 60 seconds. -tcp-keepalive 0 - -# Specify the server verbosity level. -# This can be one of: -# debug (a lot of information, useful for development/testing) -# verbose (many rarely useful info, but not a mess like the debug level) -# notice (moderately verbose, what you want in production probably) -# warning (only very important / critical messages are logged) -loglevel debug - -# Specify the log file name. Also 'stdout' can be used to force -# Redis to log on the standard output. Note that if you use standard -# output for logging but daemonize, logs will be sent to /dev/null -logfile "stdout" - -# To enable logging to the system logger, just set 'syslog-enabled' to yes, -# and optionally update the other syslog parameters to suit your needs. -# syslog-enabled no - -# Specify the syslog identity. -# syslog-ident redis - -# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. -# syslog-facility local0 - -# Set the number of databases. The default database is DB 0, you can select -# a different one on a per-connection basis using SELECT where -# dbid is a number between 0 and 'databases'-1 -databases 16 - -################################ SNAPSHOTTING ################################# -# -# Save the DB on disk: -# -# save -# -# Will save the DB if both the given number of seconds and the given -# number of write operations against the DB occurred. -# -# In the example below the behaviour will be to save: -# after 900 sec (15 min) if at least 1 key changed -# after 300 sec (5 min) if at least 10 keys changed -# after 60 sec if at least 10000 keys changed -# -# Note: you can disable saving at all commenting all the "save" lines. -# -# It is also possible to remove all the previously configured save -# points by adding a save directive with a single empty string argument -# like in the following example: -# -# save "" - -#save 900 1 -#save 300 10 -#save 60 10000 - -# By default Redis will stop accepting writes if RDB snapshots are enabled -# (at least one save point) and the latest background save failed. -# This will make the user aware (in an hard way) that data is not persisting -# on disk properly, otherwise chances are that no one will notice and some -# distater will happen. -# -# If the background saving process will start working again Redis will -# automatically allow writes again. -# -# However if you have setup your proper monitoring of the Redis server -# and persistence, you may want to disable this feature so that Redis will -# continue to work as usually even if there are problems with disk, -# permissions, and so forth. -stop-writes-on-bgsave-error yes - -# Compress string objects using LZF when dump .rdb databases? -# For default that's set to 'yes' as it's almost always a win. -# If you want to save some CPU in the saving child set it to 'no' but -# the dataset will likely be bigger if you have compressible values or keys. -rdbcompression yes - -# Since version 5 of RDB a CRC64 checksum is placed at the end of the file. -# This makes the format more resistant to corruption but there is a performance -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it -# for maximum performances. -# -# RDB files created with checksum disabled have a checksum of zero that will -# tell the loading code to skip the check. 
-rdbchecksum yes - -# The filename where to dump the DB -#dbfilename "dump.rdb" - -# The working directory. -# -# The DB will be written inside this directory, with the filename specified -# above using the 'dbfilename' configuration directive. -# -# The Append Only File will also be created inside this directory. -# -# Note that you must specify a directory here, not a file name. -dir "C:\\Devel\\projects\\redis" - -################################# REPLICATION ################################# - -# Master-Slave replication. Use slaveof to make a Redis instance a copy of -# another Redis server. Note that the configuration is local to the slave -# so for example it is possible to configure the slave to save the DB with a -# different interval, or to listen to another port, and so on. -# -# slaveof - -# If the master is password protected (using the "requirepass" configuration -# directive below) it is possible to tell the slave to authenticate before -# starting the replication synchronization process, otherwise the master will -# refuse the slave request. -# -# masterauth - -# When a slave loses its connection with the master, or when the replication -# is still in progress, the slave can act in two different ways: -# -# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will -# still reply to client requests, possibly with out of date data, or the -# data set may just be empty if this is the first synchronization. -# -# 2) if slave-serve-stale-data is set to 'no' the slave will reply with -# an error "SYNC with master in progress" to all the kind of commands -# but to INFO and SLAVEOF. -# -slave-serve-stale-data yes - -# You can configure a slave instance to accept writes or not. Writing against -# a slave instance may be useful to store some ephemeral data (because data -# written on a slave will be easily deleted after resync with the master) but -# may also cause problems if clients are writing to it because of a -# misconfiguration. -# -# Since Redis 2.6 by default slaves are read-only. -# -# Note: read only slaves are not designed to be exposed to untrusted clients -# on the internet. It's just a protection layer against misuse of the instance. -# Still a read only slave exports by default all the administrative commands -# such as CONFIG, DEBUG, and so forth. To a limited extend you can improve -# security of read only slaves using 'rename-command' to shadow all the -# administrative / dangerous commands. -slave-read-only yes - -# Slaves send PINGs to server in a predefined interval. It's possible to change -# this interval with the repl_ping_slave_period option. The default value is 10 -# seconds. -# -# repl-ping-slave-period 10 - -# The following option sets a timeout for both Bulk transfer I/O timeout and -# master data or ping response timeout. The default value is 60 seconds. -# -# It is important to make sure that this value is greater than the value -# specified for repl-ping-slave-period otherwise a timeout will be detected -# every time there is low traffic between the master and the slave. -# -# repl-timeout 60 - -# Disable TCP_NODELAY on the slave socket after SYNC? -# -# If you select "yes" Redis will use a smaller number of TCP packets and -# less bandwidth to send data to slaves. But this can add a delay for -# the data to appear on the slave side, up to 40 milliseconds with -# Linux kernels using a default configuration. 
-# -# If you select "no" the delay for data to appear on the slave side will -# be reduced but more bandwidth will be used for replication. -# -# By default we optimize for low latency, but in very high traffic conditions -# or when the master and slaves are many hops away, turning this to "yes" may -# be a good idea. -repl-disable-tcp-nodelay no - -# The slave priority is an integer number published by Redis in the INFO output. -# It is used by Redis Sentinel in order to select a slave to promote into a -# master if the master is no longer working correctly. -# -# A slave with a low priority number is considered better for promotion, so -# for instance if there are three slaves with priority 10, 100, 25 Sentinel will -# pick the one wtih priority 10, that is the lowest. -# -# However a special priority of 0 marks the slave as not able to perform the -# role of master, so a slave with priority of 0 will never be selected by -# Redis Sentinel for promotion. -# -# By default the priority is 100. -slave-priority 100 - -################################## SECURITY ################################### - -# Require clients to issue AUTH before processing any other -# commands. This might be useful in environments in which you do not trust -# others with access to the host running redis-server. -# -# This should stay commented out for backward compatibility and because most -# people do not need auth (e.g. they run their own servers). -# -# Warning: since Redis is pretty fast an outside user can try up to -# 150k passwords per second against a good box. This means that you should -# use a very strong password otherwise it will be very easy to break. -# -#requirepass mypass - -# Command renaming. -# -# It is possible to change the name of dangerous commands in a shared -# environment. For instance the CONFIG command may be renamed into something -# hard to guess so that it will still be available for internal-use tools -# but not available for general clients. -# -# Example: -# -# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 -# -# It is also possible to completely kill a command by renaming it into -# an empty string: -# -# rename-command CONFIG "" -# -# Please note that changing the name of commands that are logged into the -# AOF file or transmitted to slaves may cause problems. - -################################### LIMITS #################################### - -# Set the max number of connected clients at the same time. By default -# this limit is set to 10000 clients, however if the Redis server is not -# able to configure the process file limit to allow for the specified limit -# the max number of allowed clients is set to the current file limit -# minus 32 (as Redis reserves a few file descriptors for internal uses). -# -# Once the limit is reached Redis will close all the new connections sending -# an error 'max number of clients reached'. -# -# maxclients 10000 - -# Don't use more memory than the specified amount of bytes. -# When the memory limit is reached Redis will try to remove keys -# accordingly to the eviction policy selected (see maxmemmory-policy). -# -# If Redis can't remove keys according to the policy, or if the policy is -# set to 'noeviction', Redis will start to reply with errors to commands -# that would use more memory, like SET, LPUSH, and so on, and will continue -# to reply to read-only commands like GET. 
-# -# This option is usually useful when using Redis as an LRU cache, or to set -# an hard memory limit for an instance (using the 'noeviction' policy). -# -# WARNING: If you have slaves attached to an instance with maxmemory on, -# the size of the output buffers needed to feed the slaves are subtracted -# from the used memory count, so that network problems / resyncs will -# not trigger a loop where keys are evicted, and in turn the output -# buffer of slaves is full with DELs of keys evicted triggering the deletion -# of more keys, and so forth until the database is completely emptied. -# -# In short... if you have slaves attached it is suggested that you set a lower -# limit for maxmemory so that there is some free RAM on the system for slave -# output buffers (but this is not needed if the policy is 'noeviction'). -# -# maxmemory - -# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory -# is reached. You can select among five behaviors: -# -# volatile-lru -> remove the key with an expire set using an LRU algorithm -# allkeys-lru -> remove any key accordingly to the LRU algorithm -# volatile-random -> remove a random key with an expire set -# allkeys-random -> remove a random key, any key -# volatile-ttl -> remove the key with the nearest expire time (minor TTL) -# noeviction -> don't expire at all, just return an error on write operations -# -# Note: with any of the above policies, Redis will return an error on write -# operations, when there are not suitable keys for eviction. -# -# At the date of writing this commands are: set setnx setex append -# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd -# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby -# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby -# getset mset msetnx exec sort -# -# The default is: -# -# maxmemory-policy volatile-lru - -# LRU and minimal TTL algorithms are not precise algorithms but approximated -# algorithms (in order to save memory), so you can select as well the sample -# size to check. For instance for default Redis will check three keys and -# pick the one that was used less recently, you can change the sample size -# using the following configuration directive. -# -# maxmemory-samples 3 - -############################## APPEND ONLY MODE ############################### - -# By default Redis asynchronously dumps the dataset on disk. This mode is -# good enough in many applications, but an issue with the Redis process or -# a power outage may result into a few minutes of writes lost (depending on -# the configured save points). -# -# The Append Only File is an alternative persistence mode that provides -# much better durability. For instance using the default data fsync policy -# (see later in the config file) Redis can lose just one second of writes in a -# dramatic event like a server power outage, or a single write if something -# wrong with the Redis process itself happens, but the operating system is -# still running correctly. -# -# AOF and RDB persistence can be enabled at the same time without problems. -# If the AOF is enabled on startup Redis will load the AOF, that is the file -# with the better durability guarantees. -# -# Please check http://redis.io/topics/persistence for more information. 
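For illustration only (not taken from the original test file): a minimal sketch of what enabling AOF persistence with the default file name and the "everysec" fsync policy could look like. All values are placeholders rather than recommendations.

    appendonly yes
    # default AOF file name, written into the 'dir' configured above
    appendfilename "appendonly.aof"
    # fsync once per second: the usual speed/durability compromise
    appendfsync everysec
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb

The test instances in this diff deliberately keep "appendonly no", which is a reasonable choice for disposable test data.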
- -appendonly no - -# The name of the append only file (default: "appendonly.aof") -# appendfilename appendonly.aof - -# The fsync() call tells the Operating System to actually write data on disk -# instead to wait for more data in the output buffer. Some OS will really flush -# data on disk, some other OS will just try to do it ASAP. -# -# Redis supports three different modes: -# -# no: don't fsync, just let the OS flush the data when it wants. Faster. -# always: fsync after every write to the append only log . Slow, Safest. -# everysec: fsync only one time every second. Compromise. -# -# The default is "everysec", as that's usually the right compromise between -# speed and data safety. It's up to you to understand if you can relax this to -# "no" that will let the operating system flush the output buffer when -# it wants, for better performances (but if you can live with the idea of -# some data loss consider the default persistence mode that's snapshotting), -# or on the contrary, use "always" that's very slow but a bit safer than -# everysec. -# -# More details please check the following article: -# http://antirez.com/post/redis-persistence-demystified.html -# -# If unsure, use "everysec". - -# appendfsync always -appendfsync everysec -# appendfsync no - -# When the AOF fsync policy is set to always or everysec, and a background -# saving process (a background save or AOF log background rewriting) is -# performing a lot of I/O against the disk, in some Linux configurations -# Redis may block too long on the fsync() call. Note that there is no fix for -# this currently, as even performing fsync in a different thread will block -# our synchronous write(2) call. -# -# In order to mitigate this problem it's possible to use the following option -# that will prevent fsync() from being called in the main process while a -# BGSAVE or BGREWRITEAOF is in progress. -# -# This means that while another child is saving, the durability of Redis is -# the same as "appendfsync none". In practical terms, this means that it is -# possible to lose up to 30 seconds of log in the worst scenario (with the -# default Linux settings). -# -# If you have latency problems turn this to "yes". Otherwise leave it as -# "no" that is the safest pick from the point of view of durability. -no-appendfsync-on-rewrite no - -# Automatic rewrite of the append only file. -# Redis is able to automatically rewrite the log file implicitly calling -# BGREWRITEAOF when the AOF log size grows by the specified percentage. -# -# This is how it works: Redis remembers the size of the AOF file after the -# latest rewrite (if no rewrite has happened since the restart, the size of -# the AOF at startup is used). -# -# This base size is compared to the current size. If the current size is -# bigger than the specified percentage, the rewrite is triggered. Also -# you need to specify a minimal size for the AOF file to be rewritten, this -# is useful to avoid rewriting the AOF file even if the percentage increase -# is reached but it is still pretty small. -# -# Specify a percentage of zero in order to disable the automatic AOF -# rewrite feature. - -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 64mb - -################################ LUA SCRIPTING ############################### - -# Max execution time of a Lua script in milliseconds. -# -# If the maximum execution time is reached Redis will log that a script is -# still in execution after the maximum allowed time and will start to -# reply to queries with an error. 
-# -# When a long running script exceed the maximum execution time only the -# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be -# used to stop a script that did not yet called write commands. The second -# is the only way to shut down the server in the case a write commands was -# already issue by the script but the user don't want to wait for the natural -# termination of the script. -# -# Set it to 0 or a negative value for unlimited execution without warnings. -lua-time-limit 5000 - -################################## SLOW LOG ################################### - -# The Redis Slow Log is a system to log queries that exceeded a specified -# execution time. The execution time does not include the I/O operations -# like talking with the client, sending the reply and so forth, -# but just the time needed to actually execute the command (this is the only -# stage of command execution where the thread is blocked and can not serve -# other requests in the meantime). -# -# You can configure the slow log with two parameters: one tells Redis -# what is the execution time, in microseconds, to exceed in order for the -# command to get logged, and the other parameter is the length of the -# slow log. When a new command is logged the oldest one is removed from the -# queue of logged commands. - -# The following time is expressed in microseconds, so 1000000 is equivalent -# to one second. Note that a negative number disables the slow log, while -# a value of zero forces the logging of every command. -slowlog-log-slower-than 10000 - -# There is no limit to this length. Just be aware that it will consume memory. -# You can reclaim memory used by the slow log with SLOWLOG RESET. -slowlog-max-len 128 - -############################### ADVANCED CONFIG ############################### - -# Hashes are encoded using a memory efficient data structure when they have a -# small number of entries, and the biggest entry does not exceed a given -# threshold. These thresholds can be configured using the following directives. -hash-max-ziplist-entries 512 -hash-max-ziplist-value 64 - -# Similarly to hashes, small lists are also encoded in a special way in order -# to save a lot of space. The special representation is only used when -# you are under the following limits: -list-max-ziplist-entries 512 -list-max-ziplist-value 64 - -# Sets have a special encoding in just one case: when a set is composed -# of just strings that happens to be integers in radix 10 in the range -# of 64 bit signed integers. -# The following configuration setting sets the limit in the size of the -# set in order to use this special memory saving encoding. -set-max-intset-entries 512 - -# Similarly to hashes and lists, sorted sets are also specially encoded in -# order to save a lot of space. This encoding is only used when the length and -# elements of a sorted set are below the following limits: -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 - -# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in -# order to help rehashing the main Redis hash table (the one mapping top-level -# keys to values). The hash table implementation Redis uses (see dict.c) -# performs a lazy rehashing: the more operation you run into an hash table -# that is rehashing, the more rehashing "steps" are performed, so if the -# server is idle the rehashing is never complete and some more memory is used -# by the hash table. 
-# -# The default is to use this millisecond 10 times every second in order to -# active rehashing the main dictionaries, freeing memory when possible. -# -# If unsure: -# use "activerehashing no" if you have hard latency requirements and it is -# not a good thing in your environment that Redis can reply form time to time -# to queries with 2 milliseconds delay. -# -# use "activerehashing yes" if you don't have such hard requirements but -# want to free memory asap when possible. -activerehashing yes - -# The client output buffer limits can be used to force disconnection of clients -# that are not reading data from the server fast enough for some reason (a -# common reason is that a Pub/Sub client can't consume messages as fast as the -# publisher can produce them). -# -# The limit can be set differently for the three different classes of clients: -# -# normal -> normal clients -# slave -> slave clients and MONITOR clients -# pubsub -> clients subcribed to at least one pubsub channel or pattern -# -# The syntax of every client-output-buffer-limit directive is the following: -# -# client-output-buffer-limit -# -# A client is immediately disconnected once the hard limit is reached, or if -# the soft limit is reached and remains reached for the specified number of -# seconds (continuously). -# So for instance if the hard limit is 32 megabytes and the soft limit is -# 16 megabytes / 10 seconds, the client will get disconnected immediately -# if the size of the output buffers reach 32 megabytes, but will also get -# disconnected if the client reaches 16 megabytes and continuously overcomes -# the limit for 10 seconds. -# -# By default normal clients are not limited because they don't receive data -# without asking (in a push way), but just after a request, so only -# asynchronous clients may create a scenario where data is requested faster -# than it can read. -# -# Instead there is a default limit for pubsub and slave clients, since -# subscribers and slaves receive data in a push fashion. -# -# Both the hard or the soft limit can be disabled by setting them to zero. -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 - -# Redis calls an internal function to perform many background tasks, like -# closing connections of clients in timeot, purging expired keys that are -# never requested, and so forth. -# -# Not all tasks are perforemd with the same frequency, but Redis checks for -# tasks to perform accordingly to the specified "hz" value. -# -# By default "hz" is set to 10. Raising the value will use more CPU when -# Redis is idle, but at the same time will make Redis more responsive when -# there are many keys expiring at the same time, and timeouts may be -# handled with more precision. -# -# The range is between 1 and 500, however a value over 100 is usually not -# a good idea. Most users should use the default of 10 and raise this up to -# 100 only in environments where very low latency is required. -hz 10 - -# When a child rewrites the AOF file, if the following option is enabled -# the file will be fsync-ed every 32 MB of data generated. This is useful -# in order to commit the file to the disk more incrementally and avoid -# big latency spikes. -#aof-rewrite-incremental-fsync yes - -################################## INCLUDES ################################### - -# Include one or more other config files here. 
This is useful if you -# have a standard template that goes to all Redis server but also need -# to customize a few per-server settings. Include files can include -# other files, so use this wisely. -# -# include /path/to/local.conf -# include /path/to/other.conf -# Generated by CONFIG REWRITE - -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 - -#notify-keyspace-events "xE" -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6389 -#slaveof 127.0.0.1 6389 - -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 - -#slaveof 127.0.0.1 6389 -#slaveof 127.0.0.1 6389 - -#slaveof 127.0.0.1 6399 - -notify-keyspace-events "xE" diff --git a/redisson/src/test/resources/redis_multiLock_test_instance1.conf b/redisson/src/test/resources/redis_multiLock_test_instance1.conf deleted file mode 100644 index fea303d1e..000000000 --- a/redisson/src/test/resources/redis_multiLock_test_instance1.conf +++ /dev/null @@ -1,622 +0,0 @@ -# Redis configuration file example - -# Note on units: when memory size is needed, it is possible to specify -# it in the usual form of 1k 5GB 4M and so forth: -# -# 1k => 1000 bytes -# 1kb => 1024 bytes -# 1m => 1000000 bytes -# 1mb => 1024*1024 bytes -# 1g => 1000000000 bytes -# 1gb => 1024*1024*1024 bytes -# -# units are case insensitive so 1GB 1Gb 1gB are all the same. - -# By default Redis does not run as a daemon. Use 'yes' if you need it. -# Note that Redis will write a pid file in /var/run/redis.pid when daemonized. -daemonize no - -# When running daemonized, Redis writes a pid file in /var/run/redis.pid by -# default. You can specify a custom pid file location here. -#pidfile /var/run/redis.pid - -# Accept connections on the specified port, default is 6379. -# If port 0 is specified Redis will not listen on a TCP socket. -port 6320 - -# If you want you can bind a single interface, if the bind option is not -# specified all the interfaces will listen for incoming connections. -# -# bind 127.0.0.1 - -# Specify the path for the unix socket that will be used to listen for -# incoming connections. There is no default, so Redis will not listen -# on a unix socket when not specified. -# -# unixsocket /tmp/redis.sock -# unixsocketperm 755 - -# Close the connection after a client is idle for N seconds (0 to disable) -timeout 0 - -# TCP keepalive. -# -# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence -# of communication. This is useful for two reasons: -# -# 1) Detect dead peers. -# 2) Take the connection alive from the point of view of network -# equipment in the middle. -# -# On Linux, the specified value (in seconds) is the period used to send ACKs. -# Note that to close the connection the double of the time is needed. -# On other kernels the period depends on the kernel configuration. -# -# A reasonable value for this option is 60 seconds. -tcp-keepalive 0 - -# Specify the server verbosity level. -# This can be one of: -# debug (a lot of information, useful for development/testing) -# verbose (many rarely useful info, but not a mess like the debug level) -# notice (moderately verbose, what you want in production probably) -# warning (only very important / critical messages are logged) -loglevel debug - -# Specify the log file name. Also 'stdout' can be used to force -# Redis to log on the standard output. 
Note that if you use standard -# output for logging but daemonize, logs will be sent to /dev/null -logfile "stdout" - -# To enable logging to the system logger, just set 'syslog-enabled' to yes, -# and optionally update the other syslog parameters to suit your needs. -# syslog-enabled no - -# Specify the syslog identity. -# syslog-ident redis - -# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. -# syslog-facility local0 - -# Set the number of databases. The default database is DB 0, you can select -# a different one on a per-connection basis using SELECT where -# dbid is a number between 0 and 'databases'-1 -databases 16 - -################################ SNAPSHOTTING ################################# -# -# Save the DB on disk: -# -# save -# -# Will save the DB if both the given number of seconds and the given -# number of write operations against the DB occurred. -# -# In the example below the behaviour will be to save: -# after 900 sec (15 min) if at least 1 key changed -# after 300 sec (5 min) if at least 10 keys changed -# after 60 sec if at least 10000 keys changed -# -# Note: you can disable saving at all commenting all the "save" lines. -# -# It is also possible to remove all the previously configured save -# points by adding a save directive with a single empty string argument -# like in the following example: -# -# save "" - -#save 900 1 -#save 300 10 -#save 60 10000 - -# By default Redis will stop accepting writes if RDB snapshots are enabled -# (at least one save point) and the latest background save failed. -# This will make the user aware (in an hard way) that data is not persisting -# on disk properly, otherwise chances are that no one will notice and some -# distater will happen. -# -# If the background saving process will start working again Redis will -# automatically allow writes again. -# -# However if you have setup your proper monitoring of the Redis server -# and persistence, you may want to disable this feature so that Redis will -# continue to work as usually even if there are problems with disk, -# permissions, and so forth. -stop-writes-on-bgsave-error yes - -# Compress string objects using LZF when dump .rdb databases? -# For default that's set to 'yes' as it's almost always a win. -# If you want to save some CPU in the saving child set it to 'no' but -# the dataset will likely be bigger if you have compressible values or keys. -rdbcompression yes - -# Since version 5 of RDB a CRC64 checksum is placed at the end of the file. -# This makes the format more resistant to corruption but there is a performance -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it -# for maximum performances. -# -# RDB files created with checksum disabled have a checksum of zero that will -# tell the loading code to skip the check. -rdbchecksum yes - -# The filename where to dump the DB -#dbfilename "dump.rdb" - -# The working directory. -# -# The DB will be written inside this directory, with the filename specified -# above using the 'dbfilename' configuration directive. -# -# The Append Only File will also be created inside this directory. -# -# Note that you must specify a directory here, not a file name. -dir "C:\\Devel\\projects\\redis" - -################################# REPLICATION ################################# - -# Master-Slave replication. Use slaveof to make a Redis instance a copy of -# another Redis server. 
Note that the configuration is local to the slave -# so for example it is possible to configure the slave to save the DB with a -# different interval, or to listen to another port, and so on. -# -# slaveof - -# If the master is password protected (using the "requirepass" configuration -# directive below) it is possible to tell the slave to authenticate before -# starting the replication synchronization process, otherwise the master will -# refuse the slave request. -# -# masterauth - -# When a slave loses its connection with the master, or when the replication -# is still in progress, the slave can act in two different ways: -# -# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will -# still reply to client requests, possibly with out of date data, or the -# data set may just be empty if this is the first synchronization. -# -# 2) if slave-serve-stale-data is set to 'no' the slave will reply with -# an error "SYNC with master in progress" to all the kind of commands -# but to INFO and SLAVEOF. -# -slave-serve-stale-data yes - -# You can configure a slave instance to accept writes or not. Writing against -# a slave instance may be useful to store some ephemeral data (because data -# written on a slave will be easily deleted after resync with the master) but -# may also cause problems if clients are writing to it because of a -# misconfiguration. -# -# Since Redis 2.6 by default slaves are read-only. -# -# Note: read only slaves are not designed to be exposed to untrusted clients -# on the internet. It's just a protection layer against misuse of the instance. -# Still a read only slave exports by default all the administrative commands -# such as CONFIG, DEBUG, and so forth. To a limited extend you can improve -# security of read only slaves using 'rename-command' to shadow all the -# administrative / dangerous commands. -slave-read-only yes - -# Slaves send PINGs to server in a predefined interval. It's possible to change -# this interval with the repl_ping_slave_period option. The default value is 10 -# seconds. -# -# repl-ping-slave-period 10 - -# The following option sets a timeout for both Bulk transfer I/O timeout and -# master data or ping response timeout. The default value is 60 seconds. -# -# It is important to make sure that this value is greater than the value -# specified for repl-ping-slave-period otherwise a timeout will be detected -# every time there is low traffic between the master and the slave. -# -# repl-timeout 60 - -# Disable TCP_NODELAY on the slave socket after SYNC? -# -# If you select "yes" Redis will use a smaller number of TCP packets and -# less bandwidth to send data to slaves. But this can add a delay for -# the data to appear on the slave side, up to 40 milliseconds with -# Linux kernels using a default configuration. -# -# If you select "no" the delay for data to appear on the slave side will -# be reduced but more bandwidth will be used for replication. -# -# By default we optimize for low latency, but in very high traffic conditions -# or when the master and slaves are many hops away, turning this to "yes" may -# be a good idea. -repl-disable-tcp-nodelay no - -# The slave priority is an integer number published by Redis in the INFO output. -# It is used by Redis Sentinel in order to select a slave to promote into a -# master if the master is no longer working correctly. 
-# -# A slave with a low priority number is considered better for promotion, so -# for instance if there are three slaves with priority 10, 100, 25 Sentinel will -# pick the one wtih priority 10, that is the lowest. -# -# However a special priority of 0 marks the slave as not able to perform the -# role of master, so a slave with priority of 0 will never be selected by -# Redis Sentinel for promotion. -# -# By default the priority is 100. -slave-priority 100 - -################################## SECURITY ################################### - -# Require clients to issue AUTH before processing any other -# commands. This might be useful in environments in which you do not trust -# others with access to the host running redis-server. -# -# This should stay commented out for backward compatibility and because most -# people do not need auth (e.g. they run their own servers). -# -# Warning: since Redis is pretty fast an outside user can try up to -# 150k passwords per second against a good box. This means that you should -# use a very strong password otherwise it will be very easy to break. -# -#requirepass mypass - -# Command renaming. -# -# It is possible to change the name of dangerous commands in a shared -# environment. For instance the CONFIG command may be renamed into something -# hard to guess so that it will still be available for internal-use tools -# but not available for general clients. -# -# Example: -# -# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 -# -# It is also possible to completely kill a command by renaming it into -# an empty string: -# -# rename-command CONFIG "" -# -# Please note that changing the name of commands that are logged into the -# AOF file or transmitted to slaves may cause problems. - -################################### LIMITS #################################### - -# Set the max number of connected clients at the same time. By default -# this limit is set to 10000 clients, however if the Redis server is not -# able to configure the process file limit to allow for the specified limit -# the max number of allowed clients is set to the current file limit -# minus 32 (as Redis reserves a few file descriptors for internal uses). -# -# Once the limit is reached Redis will close all the new connections sending -# an error 'max number of clients reached'. -# -# maxclients 10000 - -# Don't use more memory than the specified amount of bytes. -# When the memory limit is reached Redis will try to remove keys -# accordingly to the eviction policy selected (see maxmemmory-policy). -# -# If Redis can't remove keys according to the policy, or if the policy is -# set to 'noeviction', Redis will start to reply with errors to commands -# that would use more memory, like SET, LPUSH, and so on, and will continue -# to reply to read-only commands like GET. -# -# This option is usually useful when using Redis as an LRU cache, or to set -# an hard memory limit for an instance (using the 'noeviction' policy). -# -# WARNING: If you have slaves attached to an instance with maxmemory on, -# the size of the output buffers needed to feed the slaves are subtracted -# from the used memory count, so that network problems / resyncs will -# not trigger a loop where keys are evicted, and in turn the output -# buffer of slaves is full with DELs of keys evicted triggering the deletion -# of more keys, and so forth until the database is completely emptied. -# -# In short... 
if you have slaves attached it is suggested that you set a lower -# limit for maxmemory so that there is some free RAM on the system for slave -# output buffers (but this is not needed if the policy is 'noeviction'). -# -# maxmemory - -# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory -# is reached. You can select among five behaviors: -# -# volatile-lru -> remove the key with an expire set using an LRU algorithm -# allkeys-lru -> remove any key accordingly to the LRU algorithm -# volatile-random -> remove a random key with an expire set -# allkeys-random -> remove a random key, any key -# volatile-ttl -> remove the key with the nearest expire time (minor TTL) -# noeviction -> don't expire at all, just return an error on write operations -# -# Note: with any of the above policies, Redis will return an error on write -# operations, when there are not suitable keys for eviction. -# -# At the date of writing this commands are: set setnx setex append -# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd -# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby -# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby -# getset mset msetnx exec sort -# -# The default is: -# -# maxmemory-policy volatile-lru - -# LRU and minimal TTL algorithms are not precise algorithms but approximated -# algorithms (in order to save memory), so you can select as well the sample -# size to check. For instance for default Redis will check three keys and -# pick the one that was used less recently, you can change the sample size -# using the following configuration directive. -# -# maxmemory-samples 3 - -############################## APPEND ONLY MODE ############################### - -# By default Redis asynchronously dumps the dataset on disk. This mode is -# good enough in many applications, but an issue with the Redis process or -# a power outage may result into a few minutes of writes lost (depending on -# the configured save points). -# -# The Append Only File is an alternative persistence mode that provides -# much better durability. For instance using the default data fsync policy -# (see later in the config file) Redis can lose just one second of writes in a -# dramatic event like a server power outage, or a single write if something -# wrong with the Redis process itself happens, but the operating system is -# still running correctly. -# -# AOF and RDB persistence can be enabled at the same time without problems. -# If the AOF is enabled on startup Redis will load the AOF, that is the file -# with the better durability guarantees. -# -# Please check http://redis.io/topics/persistence for more information. - -appendonly no - -# The name of the append only file (default: "appendonly.aof") -# appendfilename appendonly.aof - -# The fsync() call tells the Operating System to actually write data on disk -# instead to wait for more data in the output buffer. Some OS will really flush -# data on disk, some other OS will just try to do it ASAP. -# -# Redis supports three different modes: -# -# no: don't fsync, just let the OS flush the data when it wants. Faster. -# always: fsync after every write to the append only log . Slow, Safest. -# everysec: fsync only one time every second. Compromise. -# -# The default is "everysec", as that's usually the right compromise between -# speed and data safety. 
It's up to you to understand if you can relax this to -# "no" that will let the operating system flush the output buffer when -# it wants, for better performances (but if you can live with the idea of -# some data loss consider the default persistence mode that's snapshotting), -# or on the contrary, use "always" that's very slow but a bit safer than -# everysec. -# -# More details please check the following article: -# http://antirez.com/post/redis-persistence-demystified.html -# -# If unsure, use "everysec". - -# appendfsync always -appendfsync everysec -# appendfsync no - -# When the AOF fsync policy is set to always or everysec, and a background -# saving process (a background save or AOF log background rewriting) is -# performing a lot of I/O against the disk, in some Linux configurations -# Redis may block too long on the fsync() call. Note that there is no fix for -# this currently, as even performing fsync in a different thread will block -# our synchronous write(2) call. -# -# In order to mitigate this problem it's possible to use the following option -# that will prevent fsync() from being called in the main process while a -# BGSAVE or BGREWRITEAOF is in progress. -# -# This means that while another child is saving, the durability of Redis is -# the same as "appendfsync none". In practical terms, this means that it is -# possible to lose up to 30 seconds of log in the worst scenario (with the -# default Linux settings). -# -# If you have latency problems turn this to "yes". Otherwise leave it as -# "no" that is the safest pick from the point of view of durability. -no-appendfsync-on-rewrite no - -# Automatic rewrite of the append only file. -# Redis is able to automatically rewrite the log file implicitly calling -# BGREWRITEAOF when the AOF log size grows by the specified percentage. -# -# This is how it works: Redis remembers the size of the AOF file after the -# latest rewrite (if no rewrite has happened since the restart, the size of -# the AOF at startup is used). -# -# This base size is compared to the current size. If the current size is -# bigger than the specified percentage, the rewrite is triggered. Also -# you need to specify a minimal size for the AOF file to be rewritten, this -# is useful to avoid rewriting the AOF file even if the percentage increase -# is reached but it is still pretty small. -# -# Specify a percentage of zero in order to disable the automatic AOF -# rewrite feature. - -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 64mb - -################################ LUA SCRIPTING ############################### - -# Max execution time of a Lua script in milliseconds. -# -# If the maximum execution time is reached Redis will log that a script is -# still in execution after the maximum allowed time and will start to -# reply to queries with an error. -# -# When a long running script exceed the maximum execution time only the -# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be -# used to stop a script that did not yet called write commands. The second -# is the only way to shut down the server in the case a write commands was -# already issue by the script but the user don't want to wait for the natural -# termination of the script. -# -# Set it to 0 or a negative value for unlimited execution without warnings. -lua-time-limit 5000 - -################################## SLOW LOG ################################### - -# The Redis Slow Log is a system to log queries that exceeded a specified -# execution time. 
The execution time does not include the I/O operations -# like talking with the client, sending the reply and so forth, -# but just the time needed to actually execute the command (this is the only -# stage of command execution where the thread is blocked and can not serve -# other requests in the meantime). -# -# You can configure the slow log with two parameters: one tells Redis -# what is the execution time, in microseconds, to exceed in order for the -# command to get logged, and the other parameter is the length of the -# slow log. When a new command is logged the oldest one is removed from the -# queue of logged commands. - -# The following time is expressed in microseconds, so 1000000 is equivalent -# to one second. Note that a negative number disables the slow log, while -# a value of zero forces the logging of every command. -slowlog-log-slower-than 10000 - -# There is no limit to this length. Just be aware that it will consume memory. -# You can reclaim memory used by the slow log with SLOWLOG RESET. -slowlog-max-len 128 - -############################### ADVANCED CONFIG ############################### - -# Hashes are encoded using a memory efficient data structure when they have a -# small number of entries, and the biggest entry does not exceed a given -# threshold. These thresholds can be configured using the following directives. -hash-max-ziplist-entries 512 -hash-max-ziplist-value 64 - -# Similarly to hashes, small lists are also encoded in a special way in order -# to save a lot of space. The special representation is only used when -# you are under the following limits: -list-max-ziplist-entries 512 -list-max-ziplist-value 64 - -# Sets have a special encoding in just one case: when a set is composed -# of just strings that happens to be integers in radix 10 in the range -# of 64 bit signed integers. -# The following configuration setting sets the limit in the size of the -# set in order to use this special memory saving encoding. -set-max-intset-entries 512 - -# Similarly to hashes and lists, sorted sets are also specially encoded in -# order to save a lot of space. This encoding is only used when the length and -# elements of a sorted set are below the following limits: -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 - -# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in -# order to help rehashing the main Redis hash table (the one mapping top-level -# keys to values). The hash table implementation Redis uses (see dict.c) -# performs a lazy rehashing: the more operation you run into an hash table -# that is rehashing, the more rehashing "steps" are performed, so if the -# server is idle the rehashing is never complete and some more memory is used -# by the hash table. -# -# The default is to use this millisecond 10 times every second in order to -# active rehashing the main dictionaries, freeing memory when possible. -# -# If unsure: -# use "activerehashing no" if you have hard latency requirements and it is -# not a good thing in your environment that Redis can reply form time to time -# to queries with 2 milliseconds delay. -# -# use "activerehashing yes" if you don't have such hard requirements but -# want to free memory asap when possible. -activerehashing yes - -# The client output buffer limits can be used to force disconnection of clients -# that are not reading data from the server fast enough for some reason (a -# common reason is that a Pub/Sub client can't consume messages as fast as the -# publisher can produce them). 
-# -# The limit can be set differently for the three different classes of clients: -# -# normal -> normal clients -# slave -> slave clients and MONITOR clients -# pubsub -> clients subcribed to at least one pubsub channel or pattern -# -# The syntax of every client-output-buffer-limit directive is the following: -# -# client-output-buffer-limit -# -# A client is immediately disconnected once the hard limit is reached, or if -# the soft limit is reached and remains reached for the specified number of -# seconds (continuously). -# So for instance if the hard limit is 32 megabytes and the soft limit is -# 16 megabytes / 10 seconds, the client will get disconnected immediately -# if the size of the output buffers reach 32 megabytes, but will also get -# disconnected if the client reaches 16 megabytes and continuously overcomes -# the limit for 10 seconds. -# -# By default normal clients are not limited because they don't receive data -# without asking (in a push way), but just after a request, so only -# asynchronous clients may create a scenario where data is requested faster -# than it can read. -# -# Instead there is a default limit for pubsub and slave clients, since -# subscribers and slaves receive data in a push fashion. -# -# Both the hard or the soft limit can be disabled by setting them to zero. -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 - -# Redis calls an internal function to perform many background tasks, like -# closing connections of clients in timeot, purging expired keys that are -# never requested, and so forth. -# -# Not all tasks are perforemd with the same frequency, but Redis checks for -# tasks to perform accordingly to the specified "hz" value. -# -# By default "hz" is set to 10. Raising the value will use more CPU when -# Redis is idle, but at the same time will make Redis more responsive when -# there are many keys expiring at the same time, and timeouts may be -# handled with more precision. -# -# The range is between 1 and 500, however a value over 100 is usually not -# a good idea. Most users should use the default of 10 and raise this up to -# 100 only in environments where very low latency is required. -hz 10 - -# When a child rewrites the AOF file, if the following option is enabled -# the file will be fsync-ed every 32 MB of data generated. This is useful -# in order to commit the file to the disk more incrementally and avoid -# big latency spikes. -#aof-rewrite-incremental-fsync yes - -################################## INCLUDES ################################### - -# Include one or more other config files here. This is useful if you -# have a standard template that goes to all Redis server but also need -# to customize a few per-server settings. Include files can include -# other files, so use this wisely. 
-# -# include /path/to/local.conf -# include /path/to/other.conf -# Generated by CONFIG REWRITE - -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 - -#notify-keyspace-events "xE" -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6389 -#slaveof 127.0.0.1 6389 - -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 - -#slaveof 127.0.0.1 6389 -#slaveof 127.0.0.1 6389 - -#slaveof 127.0.0.1 6399 - -notify-keyspace-events "xE" diff --git a/redisson/src/test/resources/redis_multiLock_test_instance2.conf b/redisson/src/test/resources/redis_multiLock_test_instance2.conf deleted file mode 100644 index 1105e15c8..000000000 --- a/redisson/src/test/resources/redis_multiLock_test_instance2.conf +++ /dev/null @@ -1,622 +0,0 @@ -# Redis configuration file example - -# Note on units: when memory size is needed, it is possible to specify -# it in the usual form of 1k 5GB 4M and so forth: -# -# 1k => 1000 bytes -# 1kb => 1024 bytes -# 1m => 1000000 bytes -# 1mb => 1024*1024 bytes -# 1g => 1000000000 bytes -# 1gb => 1024*1024*1024 bytes -# -# units are case insensitive so 1GB 1Gb 1gB are all the same. - -# By default Redis does not run as a daemon. Use 'yes' if you need it. -# Note that Redis will write a pid file in /var/run/redis.pid when daemonized. -daemonize no - -# When running daemonized, Redis writes a pid file in /var/run/redis.pid by -# default. You can specify a custom pid file location here. -#pidfile /var/run/redis.pid - -# Accept connections on the specified port, default is 6379. -# If port 0 is specified Redis will not listen on a TCP socket. -port 6321 - -# If you want you can bind a single interface, if the bind option is not -# specified all the interfaces will listen for incoming connections. -# -# bind 127.0.0.1 - -# Specify the path for the unix socket that will be used to listen for -# incoming connections. There is no default, so Redis will not listen -# on a unix socket when not specified. -# -# unixsocket /tmp/redis.sock -# unixsocketperm 755 - -# Close the connection after a client is idle for N seconds (0 to disable) -timeout 0 - -# TCP keepalive. -# -# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence -# of communication. This is useful for two reasons: -# -# 1) Detect dead peers. -# 2) Take the connection alive from the point of view of network -# equipment in the middle. -# -# On Linux, the specified value (in seconds) is the period used to send ACKs. -# Note that to close the connection the double of the time is needed. -# On other kernels the period depends on the kernel configuration. -# -# A reasonable value for this option is 60 seconds. -tcp-keepalive 0 - -# Specify the server verbosity level. -# This can be one of: -# debug (a lot of information, useful for development/testing) -# verbose (many rarely useful info, but not a mess like the debug level) -# notice (moderately verbose, what you want in production probably) -# warning (only very important / critical messages are logged) -loglevel debug - -# Specify the log file name. Also 'stdout' can be used to force -# Redis to log on the standard output. Note that if you use standard -# output for logging but daemonize, logs will be sent to /dev/null -logfile "stdout" - -# To enable logging to the system logger, just set 'syslog-enabled' to yes, -# and optionally update the other syslog parameters to suit your needs. 
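For illustration only (these lines are not in the test files; the path and identity are placeholders): sending the log to a file and to the system logger could look like this:

    loglevel notice
    # placeholder path; the test files log to "stdout" instead
    logfile "/var/log/redis/redis.log"
    syslog-enabled yes
    syslog-ident redis
    syslog-facility local0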
-# syslog-enabled no - -# Specify the syslog identity. -# syslog-ident redis - -# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. -# syslog-facility local0 - -# Set the number of databases. The default database is DB 0, you can select -# a different one on a per-connection basis using SELECT where -# dbid is a number between 0 and 'databases'-1 -databases 16 - -################################ SNAPSHOTTING ################################# -# -# Save the DB on disk: -# -# save -# -# Will save the DB if both the given number of seconds and the given -# number of write operations against the DB occurred. -# -# In the example below the behaviour will be to save: -# after 900 sec (15 min) if at least 1 key changed -# after 300 sec (5 min) if at least 10 keys changed -# after 60 sec if at least 10000 keys changed -# -# Note: you can disable saving at all commenting all the "save" lines. -# -# It is also possible to remove all the previously configured save -# points by adding a save directive with a single empty string argument -# like in the following example: -# -# save "" - -#save 900 1 -#save 300 10 -#save 60 10000 - -# By default Redis will stop accepting writes if RDB snapshots are enabled -# (at least one save point) and the latest background save failed. -# This will make the user aware (in an hard way) that data is not persisting -# on disk properly, otherwise chances are that no one will notice and some -# distater will happen. -# -# If the background saving process will start working again Redis will -# automatically allow writes again. -# -# However if you have setup your proper monitoring of the Redis server -# and persistence, you may want to disable this feature so that Redis will -# continue to work as usually even if there are problems with disk, -# permissions, and so forth. -stop-writes-on-bgsave-error yes - -# Compress string objects using LZF when dump .rdb databases? -# For default that's set to 'yes' as it's almost always a win. -# If you want to save some CPU in the saving child set it to 'no' but -# the dataset will likely be bigger if you have compressible values or keys. -rdbcompression yes - -# Since version 5 of RDB a CRC64 checksum is placed at the end of the file. -# This makes the format more resistant to corruption but there is a performance -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it -# for maximum performances. -# -# RDB files created with checksum disabled have a checksum of zero that will -# tell the loading code to skip the check. -rdbchecksum yes - -# The filename where to dump the DB -#dbfilename "dump.rdb" - -# The working directory. -# -# The DB will be written inside this directory, with the filename specified -# above using the 'dbfilename' configuration directive. -# -# The Append Only File will also be created inside this directory. -# -# Note that you must specify a directory here, not a file name. -dir "C:\\Devel\\projects\\redis" - -################################# REPLICATION ################################# - -# Master-Slave replication. Use slaveof to make a Redis instance a copy of -# another Redis server. Note that the configuration is local to the slave -# so for example it is possible to configure the slave to save the DB with a -# different interval, or to listen to another port, and so on. 
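A hedged sketch of the slaveof setup described above; the address and password are placeholders, not values taken from these test configurations:

    # point this instance at a master (placeholder address)
    slaveof 192.168.1.10 6379
    # only needed if the master sets 'requirepass' (placeholder password)
    masterauth mySecretPassword
    slave-serve-stale-data yes
    slave-read-only yes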
-# -# slaveof - -# If the master is password protected (using the "requirepass" configuration -# directive below) it is possible to tell the slave to authenticate before -# starting the replication synchronization process, otherwise the master will -# refuse the slave request. -# -# masterauth - -# When a slave loses its connection with the master, or when the replication -# is still in progress, the slave can act in two different ways: -# -# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will -# still reply to client requests, possibly with out of date data, or the -# data set may just be empty if this is the first synchronization. -# -# 2) if slave-serve-stale-data is set to 'no' the slave will reply with -# an error "SYNC with master in progress" to all the kind of commands -# but to INFO and SLAVEOF. -# -slave-serve-stale-data yes - -# You can configure a slave instance to accept writes or not. Writing against -# a slave instance may be useful to store some ephemeral data (because data -# written on a slave will be easily deleted after resync with the master) but -# may also cause problems if clients are writing to it because of a -# misconfiguration. -# -# Since Redis 2.6 by default slaves are read-only. -# -# Note: read only slaves are not designed to be exposed to untrusted clients -# on the internet. It's just a protection layer against misuse of the instance. -# Still a read only slave exports by default all the administrative commands -# such as CONFIG, DEBUG, and so forth. To a limited extend you can improve -# security of read only slaves using 'rename-command' to shadow all the -# administrative / dangerous commands. -slave-read-only yes - -# Slaves send PINGs to server in a predefined interval. It's possible to change -# this interval with the repl_ping_slave_period option. The default value is 10 -# seconds. -# -# repl-ping-slave-period 10 - -# The following option sets a timeout for both Bulk transfer I/O timeout and -# master data or ping response timeout. The default value is 60 seconds. -# -# It is important to make sure that this value is greater than the value -# specified for repl-ping-slave-period otherwise a timeout will be detected -# every time there is low traffic between the master and the slave. -# -# repl-timeout 60 - -# Disable TCP_NODELAY on the slave socket after SYNC? -# -# If you select "yes" Redis will use a smaller number of TCP packets and -# less bandwidth to send data to slaves. But this can add a delay for -# the data to appear on the slave side, up to 40 milliseconds with -# Linux kernels using a default configuration. -# -# If you select "no" the delay for data to appear on the slave side will -# be reduced but more bandwidth will be used for replication. -# -# By default we optimize for low latency, but in very high traffic conditions -# or when the master and slaves are many hops away, turning this to "yes" may -# be a good idea. -repl-disable-tcp-nodelay no - -# The slave priority is an integer number published by Redis in the INFO output. -# It is used by Redis Sentinel in order to select a slave to promote into a -# master if the master is no longer working correctly. -# -# A slave with a low priority number is considered better for promotion, so -# for instance if there are three slaves with priority 10, 100, 25 Sentinel will -# pick the one wtih priority 10, that is the lowest. 
-# -# However a special priority of 0 marks the slave as not able to perform the -# role of master, so a slave with priority of 0 will never be selected by -# Redis Sentinel for promotion. -# -# By default the priority is 100. -slave-priority 100 - -################################## SECURITY ################################### - -# Require clients to issue AUTH before processing any other -# commands. This might be useful in environments in which you do not trust -# others with access to the host running redis-server. -# -# This should stay commented out for backward compatibility and because most -# people do not need auth (e.g. they run their own servers). -# -# Warning: since Redis is pretty fast an outside user can try up to -# 150k passwords per second against a good box. This means that you should -# use a very strong password otherwise it will be very easy to break. -# -#requirepass mypass - -# Command renaming. -# -# It is possible to change the name of dangerous commands in a shared -# environment. For instance the CONFIG command may be renamed into something -# hard to guess so that it will still be available for internal-use tools -# but not available for general clients. -# -# Example: -# -# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 -# -# It is also possible to completely kill a command by renaming it into -# an empty string: -# -# rename-command CONFIG "" -# -# Please note that changing the name of commands that are logged into the -# AOF file or transmitted to slaves may cause problems. - -################################### LIMITS #################################### - -# Set the max number of connected clients at the same time. By default -# this limit is set to 10000 clients, however if the Redis server is not -# able to configure the process file limit to allow for the specified limit -# the max number of allowed clients is set to the current file limit -# minus 32 (as Redis reserves a few file descriptors for internal uses). -# -# Once the limit is reached Redis will close all the new connections sending -# an error 'max number of clients reached'. -# -# maxclients 10000 - -# Don't use more memory than the specified amount of bytes. -# When the memory limit is reached Redis will try to remove keys -# accordingly to the eviction policy selected (see maxmemmory-policy). -# -# If Redis can't remove keys according to the policy, or if the policy is -# set to 'noeviction', Redis will start to reply with errors to commands -# that would use more memory, like SET, LPUSH, and so on, and will continue -# to reply to read-only commands like GET. -# -# This option is usually useful when using Redis as an LRU cache, or to set -# an hard memory limit for an instance (using the 'noeviction' policy). -# -# WARNING: If you have slaves attached to an instance with maxmemory on, -# the size of the output buffers needed to feed the slaves are subtracted -# from the used memory count, so that network problems / resyncs will -# not trigger a loop where keys are evicted, and in turn the output -# buffer of slaves is full with DELs of keys evicted triggering the deletion -# of more keys, and so forth until the database is completely emptied. -# -# In short... if you have slaves attached it is suggested that you set a lower -# limit for maxmemory so that there is some free RAM on the system for slave -# output buffers (but this is not needed if the policy is 'noeviction'). 
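For illustration, a bounded-memory setup along the lines described above might look like the following; the limit and policy are placeholders (the available policy names are listed in the MAXMEMORY POLICY block just below), and the limit should leave headroom for slave output buffers as noted above:

    # placeholder limit, deliberately well below the machine's RAM
    maxmemory 256mb
    maxmemory-policy allkeys-lru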
-# -# maxmemory - -# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory -# is reached. You can select among five behaviors: -# -# volatile-lru -> remove the key with an expire set using an LRU algorithm -# allkeys-lru -> remove any key accordingly to the LRU algorithm -# volatile-random -> remove a random key with an expire set -# allkeys-random -> remove a random key, any key -# volatile-ttl -> remove the key with the nearest expire time (minor TTL) -# noeviction -> don't expire at all, just return an error on write operations -# -# Note: with any of the above policies, Redis will return an error on write -# operations, when there are not suitable keys for eviction. -# -# At the date of writing this commands are: set setnx setex append -# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd -# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby -# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby -# getset mset msetnx exec sort -# -# The default is: -# -# maxmemory-policy volatile-lru - -# LRU and minimal TTL algorithms are not precise algorithms but approximated -# algorithms (in order to save memory), so you can select as well the sample -# size to check. For instance for default Redis will check three keys and -# pick the one that was used less recently, you can change the sample size -# using the following configuration directive. -# -# maxmemory-samples 3 - -############################## APPEND ONLY MODE ############################### - -# By default Redis asynchronously dumps the dataset on disk. This mode is -# good enough in many applications, but an issue with the Redis process or -# a power outage may result into a few minutes of writes lost (depending on -# the configured save points). -# -# The Append Only File is an alternative persistence mode that provides -# much better durability. For instance using the default data fsync policy -# (see later in the config file) Redis can lose just one second of writes in a -# dramatic event like a server power outage, or a single write if something -# wrong with the Redis process itself happens, but the operating system is -# still running correctly. -# -# AOF and RDB persistence can be enabled at the same time without problems. -# If the AOF is enabled on startup Redis will load the AOF, that is the file -# with the better durability guarantees. -# -# Please check http://redis.io/topics/persistence for more information. - -appendonly no - -# The name of the append only file (default: "appendonly.aof") -# appendfilename appendonly.aof - -# The fsync() call tells the Operating System to actually write data on disk -# instead to wait for more data in the output buffer. Some OS will really flush -# data on disk, some other OS will just try to do it ASAP. -# -# Redis supports three different modes: -# -# no: don't fsync, just let the OS flush the data when it wants. Faster. -# always: fsync after every write to the append only log . Slow, Safest. -# everysec: fsync only one time every second. Compromise. -# -# The default is "everysec", as that's usually the right compromise between -# speed and data safety. It's up to you to understand if you can relax this to -# "no" that will let the operating system flush the output buffer when -# it wants, for better performances (but if you can live with the idea of -# some data loss consider the default persistence mode that's snapshotting), -# or on the contrary, use "always" that's very slow but a bit safer than -# everysec. 
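The fsync policy discussed above can also be inspected or changed on a running instance. A rough redis-cli sketch (the values are examples; CONFIG REWRITE requires Redis 2.8 or newer):

    redis-cli CONFIG GET appendfsync
    redis-cli CONFIG SET appendfsync everysec
    # persist runtime changes back to the config file; this is what produced
    # the "Generated by CONFIG REWRITE" block at the end of these test files
    redis-cli CONFIG REWRITE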
-# -# More details please check the following article: -# http://antirez.com/post/redis-persistence-demystified.html -# -# If unsure, use "everysec". - -# appendfsync always -appendfsync everysec -# appendfsync no - -# When the AOF fsync policy is set to always or everysec, and a background -# saving process (a background save or AOF log background rewriting) is -# performing a lot of I/O against the disk, in some Linux configurations -# Redis may block too long on the fsync() call. Note that there is no fix for -# this currently, as even performing fsync in a different thread will block -# our synchronous write(2) call. -# -# In order to mitigate this problem it's possible to use the following option -# that will prevent fsync() from being called in the main process while a -# BGSAVE or BGREWRITEAOF is in progress. -# -# This means that while another child is saving, the durability of Redis is -# the same as "appendfsync none". In practical terms, this means that it is -# possible to lose up to 30 seconds of log in the worst scenario (with the -# default Linux settings). -# -# If you have latency problems turn this to "yes". Otherwise leave it as -# "no" that is the safest pick from the point of view of durability. -no-appendfsync-on-rewrite no - -# Automatic rewrite of the append only file. -# Redis is able to automatically rewrite the log file implicitly calling -# BGREWRITEAOF when the AOF log size grows by the specified percentage. -# -# This is how it works: Redis remembers the size of the AOF file after the -# latest rewrite (if no rewrite has happened since the restart, the size of -# the AOF at startup is used). -# -# This base size is compared to the current size. If the current size is -# bigger than the specified percentage, the rewrite is triggered. Also -# you need to specify a minimal size for the AOF file to be rewritten, this -# is useful to avoid rewriting the AOF file even if the percentage increase -# is reached but it is still pretty small. -# -# Specify a percentage of zero in order to disable the automatic AOF -# rewrite feature. - -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 64mb - -################################ LUA SCRIPTING ############################### - -# Max execution time of a Lua script in milliseconds. -# -# If the maximum execution time is reached Redis will log that a script is -# still in execution after the maximum allowed time and will start to -# reply to queries with an error. -# -# When a long running script exceed the maximum execution time only the -# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be -# used to stop a script that did not yet called write commands. The second -# is the only way to shut down the server in the case a write commands was -# already issue by the script but the user don't want to wait for the natural -# termination of the script. -# -# Set it to 0 or a negative value for unlimited execution without warnings. -lua-time-limit 5000 - -################################## SLOW LOG ################################### - -# The Redis Slow Log is a system to log queries that exceeded a specified -# execution time. The execution time does not include the I/O operations -# like talking with the client, sending the reply and so forth, -# but just the time needed to actually execute the command (this is the only -# stage of command execution where the thread is blocked and can not serve -# other requests in the meantime). 
-# -# You can configure the slow log with two parameters: one tells Redis -# what is the execution time, in microseconds, to exceed in order for the -# command to get logged, and the other parameter is the length of the -# slow log. When a new command is logged the oldest one is removed from the -# queue of logged commands. - -# The following time is expressed in microseconds, so 1000000 is equivalent -# to one second. Note that a negative number disables the slow log, while -# a value of zero forces the logging of every command. -slowlog-log-slower-than 10000 - -# There is no limit to this length. Just be aware that it will consume memory. -# You can reclaim memory used by the slow log with SLOWLOG RESET. -slowlog-max-len 128 - -############################### ADVANCED CONFIG ############################### - -# Hashes are encoded using a memory efficient data structure when they have a -# small number of entries, and the biggest entry does not exceed a given -# threshold. These thresholds can be configured using the following directives. -hash-max-ziplist-entries 512 -hash-max-ziplist-value 64 - -# Similarly to hashes, small lists are also encoded in a special way in order -# to save a lot of space. The special representation is only used when -# you are under the following limits: -list-max-ziplist-entries 512 -list-max-ziplist-value 64 - -# Sets have a special encoding in just one case: when a set is composed -# of just strings that happens to be integers in radix 10 in the range -# of 64 bit signed integers. -# The following configuration setting sets the limit in the size of the -# set in order to use this special memory saving encoding. -set-max-intset-entries 512 - -# Similarly to hashes and lists, sorted sets are also specially encoded in -# order to save a lot of space. This encoding is only used when the length and -# elements of a sorted set are below the following limits: -zset-max-ziplist-entries 128 -zset-max-ziplist-value 64 - -# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in -# order to help rehashing the main Redis hash table (the one mapping top-level -# keys to values). The hash table implementation Redis uses (see dict.c) -# performs a lazy rehashing: the more operation you run into an hash table -# that is rehashing, the more rehashing "steps" are performed, so if the -# server is idle the rehashing is never complete and some more memory is used -# by the hash table. -# -# The default is to use this millisecond 10 times every second in order to -# active rehashing the main dictionaries, freeing memory when possible. -# -# If unsure: -# use "activerehashing no" if you have hard latency requirements and it is -# not a good thing in your environment that Redis can reply form time to time -# to queries with 2 milliseconds delay. -# -# use "activerehashing yes" if you don't have such hard requirements but -# want to free memory asap when possible. -activerehashing yes - -# The client output buffer limits can be used to force disconnection of clients -# that are not reading data from the server fast enough for some reason (a -# common reason is that a Pub/Sub client can't consume messages as fast as the -# publisher can produce them). 
-# -# The limit can be set differently for the three different classes of clients: -# -# normal -> normal clients -# slave -> slave clients and MONITOR clients -# pubsub -> clients subcribed to at least one pubsub channel or pattern -# -# The syntax of every client-output-buffer-limit directive is the following: -# -# client-output-buffer-limit -# -# A client is immediately disconnected once the hard limit is reached, or if -# the soft limit is reached and remains reached for the specified number of -# seconds (continuously). -# So for instance if the hard limit is 32 megabytes and the soft limit is -# 16 megabytes / 10 seconds, the client will get disconnected immediately -# if the size of the output buffers reach 32 megabytes, but will also get -# disconnected if the client reaches 16 megabytes and continuously overcomes -# the limit for 10 seconds. -# -# By default normal clients are not limited because they don't receive data -# without asking (in a push way), but just after a request, so only -# asynchronous clients may create a scenario where data is requested faster -# than it can read. -# -# Instead there is a default limit for pubsub and slave clients, since -# subscribers and slaves receive data in a push fashion. -# -# Both the hard or the soft limit can be disabled by setting them to zero. -client-output-buffer-limit normal 0 0 0 -client-output-buffer-limit slave 256mb 64mb 60 -client-output-buffer-limit pubsub 32mb 8mb 60 - -# Redis calls an internal function to perform many background tasks, like -# closing connections of clients in timeot, purging expired keys that are -# never requested, and so forth. -# -# Not all tasks are perforemd with the same frequency, but Redis checks for -# tasks to perform accordingly to the specified "hz" value. -# -# By default "hz" is set to 10. Raising the value will use more CPU when -# Redis is idle, but at the same time will make Redis more responsive when -# there are many keys expiring at the same time, and timeouts may be -# handled with more precision. -# -# The range is between 1 and 500, however a value over 100 is usually not -# a good idea. Most users should use the default of 10 and raise this up to -# 100 only in environments where very low latency is required. -hz 10 - -# When a child rewrites the AOF file, if the following option is enabled -# the file will be fsync-ed every 32 MB of data generated. This is useful -# in order to commit the file to the disk more incrementally and avoid -# big latency spikes. -#aof-rewrite-incremental-fsync yes - -################################## INCLUDES ################################### - -# Include one or more other config files here. This is useful if you -# have a standard template that goes to all Redis server but also need -# to customize a few per-server settings. Include files can include -# other files, so use this wisely. 
[... commented-out CONFIG REWRITE leftovers (#slaveof / #notify-keyspace-events) omitted ...]
-notify-keyspace-events "xE"
diff --git a/redisson/src/test/resources/redis_multiLock_test_instance3.conf b/redisson/src/test/resources/redis_multiLock_test_instance3.conf
deleted file mode 100644
index dda5c9b08..000000000
--- a/redisson/src/test/resources/redis_multiLock_test_instance3.conf
+++ /dev/null
@@ -1,622 +0,0 @@
-# Redis configuration file example
[... 622 deleted lines: the stock redis.conf example commentary plus the effective directives below; save points, dbfilename, requirepass and maxmemory are left commented out ...]
-daemonize no
-port 6322
-timeout 0
-tcp-keepalive 0
-loglevel debug
-logfile "stdout"
-databases 16
-stop-writes-on-bgsave-error yes
-rdbcompression yes
-rdbchecksum yes
-dir "C:\\Devel\\projects\\redis"
-slave-serve-stale-data yes
-slave-read-only yes
-repl-disable-tcp-nodelay no
-slave-priority 100
-appendonly no
-appendfsync everysec
-no-appendfsync-on-rewrite no
-auto-aof-rewrite-percentage 100
-auto-aof-rewrite-min-size 64mb
-lua-time-limit 5000
-slowlog-log-slower-than 10000
-slowlog-max-len 128
-hash-max-ziplist-entries 512
-hash-max-ziplist-value 64
-list-max-ziplist-entries 512
-list-max-ziplist-value 64
-set-max-intset-entries 512
-zset-max-ziplist-entries 128
-zset-max-ziplist-value 64
-activerehashing yes
-client-output-buffer-limit normal 0 0 0
-client-output-buffer-limit slave 256mb 64mb 60
-client-output-buffer-limit pubsub 32mb 8mb 60
-hz 10
-# -# include /path/to/local.conf -# include /path/to/other.conf -# Generated by CONFIG REWRITE - -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 - -#notify-keyspace-events "xE" -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6389 -#slaveof 127.0.0.1 6389 - -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 -#slaveof 127.0.0.1 6399 - -#slaveof 127.0.0.1 6389 -#slaveof 127.0.0.1 6389 - -#slaveof 127.0.0.1 6399 - -notify-keyspace-events "xE" diff --git a/redisson/src/test/resources/redis_oom_test.conf b/redisson/src/test/resources/redis_oom_test.conf deleted file mode 100644 index 19e4bb24e..000000000 --- a/redisson/src/test/resources/redis_oom_test.conf +++ /dev/null @@ -1,622 +0,0 @@ -# Redis configuration file example - -# Note on units: when memory size is needed, it is possible to specify -# it in the usual form of 1k 5GB 4M and so forth: -# -# 1k => 1000 bytes -# 1kb => 1024 bytes -# 1m => 1000000 bytes -# 1mb => 1024*1024 bytes -# 1g => 1000000000 bytes -# 1gb => 1024*1024*1024 bytes -# -# units are case insensitive so 1GB 1Gb 1gB are all the same. - -# By default Redis does not run as a daemon. Use 'yes' if you need it. -# Note that Redis will write a pid file in /var/run/redis.pid when daemonized. -daemonize no - -# When running daemonized, Redis writes a pid file in /var/run/redis.pid by -# default. You can specify a custom pid file location here. -#pidfile /var/run/redis.pid - -# Accept connections on the specified port, default is 6379. -# If port 0 is specified Redis will not listen on a TCP socket. -port 6319 - -# If you want you can bind a single interface, if the bind option is not -# specified all the interfaces will listen for incoming connections. -# -# bind 127.0.0.1 - -# Specify the path for the unix socket that will be used to listen for -# incoming connections. There is no default, so Redis will not listen -# on a unix socket when not specified. -# -# unixsocket /tmp/redis.sock -# unixsocketperm 755 - -# Close the connection after a client is idle for N seconds (0 to disable) -timeout 0 - -# TCP keepalive. -# -# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence -# of communication. This is useful for two reasons: -# -# 1) Detect dead peers. -# 2) Take the connection alive from the point of view of network -# equipment in the middle. -# -# On Linux, the specified value (in seconds) is the period used to send ACKs. -# Note that to close the connection the double of the time is needed. -# On other kernels the period depends on the kernel configuration. -# -# A reasonable value for this option is 60 seconds. -tcp-keepalive 0 - -# Specify the server verbosity level. -# This can be one of: -# debug (a lot of information, useful for development/testing) -# verbose (many rarely useful info, but not a mess like the debug level) -# notice (moderately verbose, what you want in production probably) -# warning (only very important / critical messages are logged) -loglevel debug - -# Specify the log file name. Also 'stdout' can be used to force -# Redis to log on the standard output. Note that if you use standard -# output for logging but daemonize, logs will be sent to /dev/null -logfile "stdout" - -# To enable logging to the system logger, just set 'syslog-enabled' to yes, -# and optionally update the other syslog parameters to suit your needs. -# syslog-enabled no - -# Specify the syslog identity. 
-# syslog-ident redis - -# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. -# syslog-facility local0 - -# Set the number of databases. The default database is DB 0, you can select -# a different one on a per-connection basis using SELECT where -# dbid is a number between 0 and 'databases'-1 -databases 16 - -################################ SNAPSHOTTING ################################# -# -# Save the DB on disk: -# -# save -# -# Will save the DB if both the given number of seconds and the given -# number of write operations against the DB occurred. -# -# In the example below the behaviour will be to save: -# after 900 sec (15 min) if at least 1 key changed -# after 300 sec (5 min) if at least 10 keys changed -# after 60 sec if at least 10000 keys changed -# -# Note: you can disable saving at all commenting all the "save" lines. -# -# It is also possible to remove all the previously configured save -# points by adding a save directive with a single empty string argument -# like in the following example: -# -# save "" - -#save 900 1 -#save 300 10 -#save 60 10000 - -# By default Redis will stop accepting writes if RDB snapshots are enabled -# (at least one save point) and the latest background save failed. -# This will make the user aware (in an hard way) that data is not persisting -# on disk properly, otherwise chances are that no one will notice and some -# distater will happen. -# -# If the background saving process will start working again Redis will -# automatically allow writes again. -# -# However if you have setup your proper monitoring of the Redis server -# and persistence, you may want to disable this feature so that Redis will -# continue to work as usually even if there are problems with disk, -# permissions, and so forth. -stop-writes-on-bgsave-error yes - -# Compress string objects using LZF when dump .rdb databases? -# For default that's set to 'yes' as it's almost always a win. -# If you want to save some CPU in the saving child set it to 'no' but -# the dataset will likely be bigger if you have compressible values or keys. -rdbcompression yes - -# Since version 5 of RDB a CRC64 checksum is placed at the end of the file. -# This makes the format more resistant to corruption but there is a performance -# hit to pay (around 10%) when saving and loading RDB files, so you can disable it -# for maximum performances. -# -# RDB files created with checksum disabled have a checksum of zero that will -# tell the loading code to skip the check. -rdbchecksum yes - -# The filename where to dump the DB -#dbfilename "dump.rdb" - -# The working directory. -# -# The DB will be written inside this directory, with the filename specified -# above using the 'dbfilename' configuration directive. -# -# The Append Only File will also be created inside this directory. -# -# Note that you must specify a directory here, not a file name. -dir "C:\\Devel\\projects\\redis" - -################################# REPLICATION ################################# - -# Master-Slave replication. Use slaveof to make a Redis instance a copy of -# another Redis server. Note that the configuration is local to the slave -# so for example it is possible to configure the slave to save the DB with a -# different interval, or to listen to another port, and so on. 
-# -# slaveof - -# If the master is password protected (using the "requirepass" configuration -# directive below) it is possible to tell the slave to authenticate before -# starting the replication synchronization process, otherwise the master will -# refuse the slave request. -# -# masterauth - -# When a slave loses its connection with the master, or when the replication -# is still in progress, the slave can act in two different ways: -# -# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will -# still reply to client requests, possibly with out of date data, or the -# data set may just be empty if this is the first synchronization. -# -# 2) if slave-serve-stale-data is set to 'no' the slave will reply with -# an error "SYNC with master in progress" to all the kind of commands -# but to INFO and SLAVEOF. -# -slave-serve-stale-data yes - -# You can configure a slave instance to accept writes or not. Writing against -# a slave instance may be useful to store some ephemeral data (because data -# written on a slave will be easily deleted after resync with the master) but -# may also cause problems if clients are writing to it because of a -# misconfiguration. -# -# Since Redis 2.6 by default slaves are read-only. -# -# Note: read only slaves are not designed to be exposed to untrusted clients -# on the internet. It's just a protection layer against misuse of the instance. -# Still a read only slave exports by default all the administrative commands -# such as CONFIG, DEBUG, and so forth. To a limited extend you can improve -# security of read only slaves using 'rename-command' to shadow all the -# administrative / dangerous commands. -slave-read-only yes - -# Slaves send PINGs to server in a predefined interval. It's possible to change -# this interval with the repl_ping_slave_period option. The default value is 10 -# seconds. -# -# repl-ping-slave-period 10 - -# The following option sets a timeout for both Bulk transfer I/O timeout and -# master data or ping response timeout. The default value is 60 seconds. -# -# It is important to make sure that this value is greater than the value -# specified for repl-ping-slave-period otherwise a timeout will be detected -# every time there is low traffic between the master and the slave. -# -# repl-timeout 60 - -# Disable TCP_NODELAY on the slave socket after SYNC? -# -# If you select "yes" Redis will use a smaller number of TCP packets and -# less bandwidth to send data to slaves. But this can add a delay for -# the data to appear on the slave side, up to 40 milliseconds with -# Linux kernels using a default configuration. -# -# If you select "no" the delay for data to appear on the slave side will -# be reduced but more bandwidth will be used for replication. -# -# By default we optimize for low latency, but in very high traffic conditions -# or when the master and slaves are many hops away, turning this to "yes" may -# be a good idea. -repl-disable-tcp-nodelay no - -# The slave priority is an integer number published by Redis in the INFO output. -# It is used by Redis Sentinel in order to select a slave to promote into a -# master if the master is no longer working correctly. -# -# A slave with a low priority number is considered better for promotion, so -# for instance if there are three slaves with priority 10, 100, 25 Sentinel will -# pick the one wtih priority 10, that is the lowest. 
-# -# However a special priority of 0 marks the slave as not able to perform the -# role of master, so a slave with priority of 0 will never be selected by -# Redis Sentinel for promotion. -# -# By default the priority is 100. -slave-priority 100 - -################################## SECURITY ################################### - -# Require clients to issue AUTH before processing any other -# commands. This might be useful in environments in which you do not trust -# others with access to the host running redis-server. -# -# This should stay commented out for backward compatibility and because most -# people do not need auth (e.g. they run their own servers). -# -# Warning: since Redis is pretty fast an outside user can try up to -# 150k passwords per second against a good box. This means that you should -# use a very strong password otherwise it will be very easy to break. -# -#requirepass mypass - -# Command renaming. -# -# It is possible to change the name of dangerous commands in a shared -# environment. For instance the CONFIG command may be renamed into something -# hard to guess so that it will still be available for internal-use tools -# but not available for general clients. -# -# Example: -# -# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 -# -# It is also possible to completely kill a command by renaming it into -# an empty string: -# -# rename-command CONFIG "" -# -# Please note that changing the name of commands that are logged into the -# AOF file or transmitted to slaves may cause problems. - -################################### LIMITS #################################### - -# Set the max number of connected clients at the same time. By default -# this limit is set to 10000 clients, however if the Redis server is not -# able to configure the process file limit to allow for the specified limit -# the max number of allowed clients is set to the current file limit -# minus 32 (as Redis reserves a few file descriptors for internal uses). -# -# Once the limit is reached Redis will close all the new connections sending -# an error 'max number of clients reached'. -# -# maxclients 10000 - -# Don't use more memory than the specified amount of bytes. -# When the memory limit is reached Redis will try to remove keys -# accordingly to the eviction policy selected (see maxmemmory-policy). -# -# If Redis can't remove keys according to the policy, or if the policy is -# set to 'noeviction', Redis will start to reply with errors to commands -# that would use more memory, like SET, LPUSH, and so on, and will continue -# to reply to read-only commands like GET. -# -# This option is usually useful when using Redis as an LRU cache, or to set -# an hard memory limit for an instance (using the 'noeviction' policy). -# -# WARNING: If you have slaves attached to an instance with maxmemory on, -# the size of the output buffers needed to feed the slaves are subtracted -# from the used memory count, so that network problems / resyncs will -# not trigger a loop where keys are evicted, and in turn the output -# buffer of slaves is full with DELs of keys evicted triggering the deletion -# of more keys, and so forth until the database is completely emptied. -# -# In short... if you have slaves attached it is suggested that you set a lower -# limit for maxmemory so that there is some free RAM on the system for slave -# output buffers (but this is not needed if the policy is 'noeviction'). 
-# -maxmemory 1mb - -# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory -# is reached. You can select among five behaviors: -# -# volatile-lru -> remove the key with an expire set using an LRU algorithm -# allkeys-lru -> remove any key accordingly to the LRU algorithm -# volatile-random -> remove a random key with an expire set -# allkeys-random -> remove a random key, any key -# volatile-ttl -> remove the key with the nearest expire time (minor TTL) -# noeviction -> don't expire at all, just return an error on write operations -# -# Note: with any of the above policies, Redis will return an error on write -# operations, when there are not suitable keys for eviction. -# -# At the date of writing this commands are: set setnx setex append -# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd -# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby -# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby -# getset mset msetnx exec sort -# -# The default is: -# -# maxmemory-policy volatile-lru - -# LRU and minimal TTL algorithms are not precise algorithms but approximated -# algorithms (in order to save memory), so you can select as well the sample -# size to check. For instance for default Redis will check three keys and -# pick the one that was used less recently, you can change the sample size -# using the following configuration directive. -# -# maxmemory-samples 3 - -############################## APPEND ONLY MODE ############################### - -# By default Redis asynchronously dumps the dataset on disk. This mode is -# good enough in many applications, but an issue with the Redis process or -# a power outage may result into a few minutes of writes lost (depending on -# the configured save points). -# -# The Append Only File is an alternative persistence mode that provides -# much better durability. For instance using the default data fsync policy -# (see later in the config file) Redis can lose just one second of writes in a -# dramatic event like a server power outage, or a single write if something -# wrong with the Redis process itself happens, but the operating system is -# still running correctly. -# -# AOF and RDB persistence can be enabled at the same time without problems. -# If the AOF is enabled on startup Redis will load the AOF, that is the file -# with the better durability guarantees. -# -# Please check http://redis.io/topics/persistence for more information. - -appendonly no - -# The name of the append only file (default: "appendonly.aof") -# appendfilename appendonly.aof - -# The fsync() call tells the Operating System to actually write data on disk -# instead to wait for more data in the output buffer. Some OS will really flush -# data on disk, some other OS will just try to do it ASAP. -# -# Redis supports three different modes: -# -# no: don't fsync, just let the OS flush the data when it wants. Faster. -# always: fsync after every write to the append only log . Slow, Safest. -# everysec: fsync only one time every second. Compromise. -# -# The default is "everysec", as that's usually the right compromise between -# speed and data safety. It's up to you to understand if you can relax this to -# "no" that will let the operating system flush the output buffer when -# it wants, for better performances (but if you can live with the idea of -# some data loss consider the default persistence mode that's snapshotting), -# or on the contrary, use "always" that's very slow but a bit safer than -# everysec. 
-# -# More details please check the following article: -# http://antirez.com/post/redis-persistence-demystified.html -# -# If unsure, use "everysec". - -# appendfsync always -appendfsync everysec -# appendfsync no - -# When the AOF fsync policy is set to always or everysec, and a background -# saving process (a background save or AOF log background rewriting) is -# performing a lot of I/O against the disk, in some Linux configurations -# Redis may block too long on the fsync() call. Note that there is no fix for -# this currently, as even performing fsync in a different thread will block -# our synchronous write(2) call. -# -# In order to mitigate this problem it's possible to use the following option -# that will prevent fsync() from being called in the main process while a -# BGSAVE or BGREWRITEAOF is in progress. -# -# This means that while another child is saving, the durability of Redis is -# the same as "appendfsync none". In practical terms, this means that it is -# possible to lose up to 30 seconds of log in the worst scenario (with the -# default Linux settings). -# -# If you have latency problems turn this to "yes". Otherwise leave it as -# "no" that is the safest pick from the point of view of durability. -no-appendfsync-on-rewrite no - -# Automatic rewrite of the append only file. -# Redis is able to automatically rewrite the log file implicitly calling -# BGREWRITEAOF when the AOF log size grows by the specified percentage. -# -# This is how it works: Redis remembers the size of the AOF file after the -# latest rewrite (if no rewrite has happened since the restart, the size of -# the AOF at startup is used). -# -# This base size is compared to the current size. If the current size is -# bigger than the specified percentage, the rewrite is triggered. Also -# you need to specify a minimal size for the AOF file to be rewritten, this -# is useful to avoid rewriting the AOF file even if the percentage increase -# is reached but it is still pretty small. -# -# Specify a percentage of zero in order to disable the automatic AOF -# rewrite feature. - -auto-aof-rewrite-percentage 100 -auto-aof-rewrite-min-size 64mb - -################################ LUA SCRIPTING ############################### - -# Max execution time of a Lua script in milliseconds. -# -# If the maximum execution time is reached Redis will log that a script is -# still in execution after the maximum allowed time and will start to -# reply to queries with an error. -# -# When a long running script exceed the maximum execution time only the -# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be -# used to stop a script that did not yet called write commands. The second -# is the only way to shut down the server in the case a write commands was -# already issue by the script but the user don't want to wait for the natural -# termination of the script. -# -# Set it to 0 or a negative value for unlimited execution without warnings. -lua-time-limit 5000 - -################################## SLOW LOG ################################### - -# The Redis Slow Log is a system to log queries that exceeded a specified -# execution time. The execution time does not include the I/O operations -# like talking with the client, sending the reply and so forth, -# but just the time needed to actually execute the command (this is the only -# stage of command execution where the thread is blocked and can not serve -# other requests in the meantime). 
-#
-# You can configure the slow log with two parameters: one tells Redis
-# what execution time, in microseconds, must be exceeded in order for the
-# command to get logged, and the other parameter is the length of the
-# slow log. When a new command is logged the oldest one is removed from the
-# queue of logged commands.
-
-# The following time is expressed in microseconds, so 1000000 is equivalent
-# to one second. Note that a negative number disables the slow log, while
-# a value of zero forces the logging of every command.
-slowlog-log-slower-than 10000
-
-# There is no limit to this length. Just be aware that it will consume memory.
-# You can reclaim memory used by the slow log with SLOWLOG RESET.
-slowlog-max-len 128
-
-############################### ADVANCED CONFIG ###############################
-
-# Hashes are encoded using a memory efficient data structure when they have a
-# small number of entries, and the biggest entry does not exceed a given
-# threshold. These thresholds can be configured using the following directives.
-hash-max-ziplist-entries 512
-hash-max-ziplist-value 64
-
-# Similarly to hashes, small lists are also encoded in a special way in order
-# to save a lot of space. The special representation is only used when
-# you are under the following limits:
-list-max-ziplist-entries 512
-list-max-ziplist-value 64
-
-# Sets have a special encoding in just one case: when a set is composed
-# of just strings that happen to be integers in radix 10 in the range
-# of 64 bit signed integers.
-# The following configuration setting sets the limit on the size of the
-# set in order to use this special memory saving encoding.
-set-max-intset-entries 512
-
-# Similarly to hashes and lists, sorted sets are also specially encoded in
-# order to save a lot of space. This encoding is only used when the length and
-# elements of a sorted set are below the following limits:
-zset-max-ziplist-entries 128
-zset-max-ziplist-value 64
-
-# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
-# order to help rehash the main Redis hash table (the one mapping top-level
-# keys to values). The hash table implementation Redis uses (see dict.c)
-# performs a lazy rehashing: the more operations you run against a hash table
-# that is rehashing, the more rehashing "steps" are performed, so if the
-# server is idle the rehashing is never complete and some more memory is used
-# by the hash table.
-#
-# The default is to use this millisecond 10 times every second in order to
-# actively rehash the main dictionaries, freeing memory when possible.
-#
-# If unsure:
-# use "activerehashing no" if you have hard latency requirements and it is
-# not a good thing in your environment that Redis can reply from time to time
-# to queries with a 2 millisecond delay.
-#
-# use "activerehashing yes" if you don't have such hard requirements but
-# want to free memory as soon as possible.
-activerehashing yes
-
-# The client output buffer limits can be used to force disconnection of clients
-# that are not reading data from the server fast enough for some reason (a
-# common reason is that a Pub/Sub client can't consume messages as fast as the
-# publisher can produce them).
-#
-# The limit can be set differently for the three different classes of clients:
-#
-# normal -> normal clients
-# slave -> slave clients and MONITOR clients
-# pubsub -> clients subscribed to at least one pubsub channel or pattern
-#
-# The syntax of every client-output-buffer-limit directive is the following:
-#
-# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
-#
-# A client is immediately disconnected once the hard limit is reached, or if
-# the soft limit is reached and remains reached for the specified number of
-# seconds (continuously).
-# So for instance if the hard limit is 32 megabytes and the soft limit is
-# 16 megabytes / 10 seconds, the client will get disconnected immediately
-# if the size of the output buffers reaches 32 megabytes, but will also get
-# disconnected if the client reaches 16 megabytes and continuously exceeds
-# the limit for 10 seconds.
-#
-# By default normal clients are not limited because they don't receive data
-# without asking (in a push way), but just after a request, so only
-# asynchronous clients may create a scenario where data is requested faster
-# than it can be read.
-#
-# Instead there is a default limit for pubsub and slave clients, since
-# subscribers and slaves receive data in a push fashion.
-#
-# Both the hard and the soft limit can be disabled by setting them to zero.
-client-output-buffer-limit normal 0 0 0
-client-output-buffer-limit slave 256mb 64mb 60
-client-output-buffer-limit pubsub 32mb 8mb 60
-
-# Redis calls an internal function to perform many background tasks, like
-# closing connections of clients on timeout, purging expired keys that are
-# never requested, and so forth.
-#
-# Not all tasks are performed with the same frequency, but Redis checks for
-# tasks to perform according to the specified "hz" value.
-#
-# By default "hz" is set to 10. Raising the value will use more CPU when
-# Redis is idle, but at the same time will make Redis more responsive when
-# there are many keys expiring at the same time, and timeouts may be
-# handled with more precision.
-#
-# The range is between 1 and 500, however a value over 100 is usually not
-# a good idea. Most users should use the default of 10 and raise this up to
-# 100 only in environments where very low latency is required.
-hz 10
-
-# When a child rewrites the AOF file, if the following option is enabled
-# the file will be fsync-ed every 32 MB of data generated. This is useful
-# in order to commit the file to the disk more incrementally and avoid
-# big latency spikes.
-#aof-rewrite-incremental-fsync yes
-
-################################## INCLUDES ###################################
-
-# Include one or more other config files here. This is useful if you
-# have a standard template that goes to all Redis servers but also need
-# to customize a few per-server settings. Include files can include
-# other files, so use this wisely.
-#
-# include /path/to/local.conf
-# include /path/to/other.conf
-# Generated by CONFIG REWRITE
-
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6399
-
-#notify-keyspace-events "xE"
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6389
-#slaveof 127.0.0.1 6389
-
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6399
-#slaveof 127.0.0.1 6399
-
-#slaveof 127.0.0.1 6389
-#slaveof 127.0.0.1 6389
-
-#slaveof 127.0.0.1 6399
-
-notify-keyspace-events "xE"
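The one non-default directive left active in the removed config above is `notify-keyspace-events "xE"`, which makes Redis publish keyevent notifications when keys expire; by default this setting is empty and no such events are published. As a minimal sketch of how a client might consume those notifications through Redisson's pattern topic API (the address, class name, and listener shape below are illustrative assumptions, not part of this change, and listener signatures vary between Redisson versions):

```java
import org.redisson.Redisson;
import org.redisson.api.RPatternTopic;
import org.redisson.api.RedissonClient;
import org.redisson.client.codec.StringCodec;
import org.redisson.config.Config;

public class ExpiredKeyListenerSketch {
    public static void main(String[] args) {
        // Assumed single-node address; adjust to your environment.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        // With notify-keyspace-events "xE" the server publishes expired-key
        // notifications to the __keyevent@<db>__:expired channels.
        RPatternTopic<String> expiredTopic =
                redisson.getPatternTopic("__keyevent@*__:expired", new StringCodec());
        expiredTopic.addListener((pattern, channel, expiredKey) ->
                System.out.println("Expired key: " + expiredKey));

        // ... application work ...
        // redisson.shutdown();
    }
}
```

Note that such a subscriber is an ordinary pub/sub client, so it is subject to the `client-output-buffer-limit pubsub 32mb 8mb 60` limits configured above if it falls behind the publisher.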