Channel: Active questions tagged redis+java - Stack Overflow

Range Querying in Redis - Spring Data Redis


Is there a way to implement range queries in Redis using Spring Data Redis?

For example:

If my POJO class has a Date field (which is not a unique identifier in my data model) and I need the data that falls within a desired date range, can Spring Data Redis construct a single query for that instead of querying each date individually?
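
As far as I know, the @RedisHash secondary indexes only cover equality lookups, so a common workaround is to maintain a sorted set whose score is the timestamp and query it with ZRANGEBYSCORE. A minimal sketch, assuming a String-typed RedisTemplate and a hypothetical index key pojo:by-date:

import java.util.Date;
import java.util.Set;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class DateRangeIndex {

    private static final String INDEX_KEY = "pojo:by-date"; // placeholder key name

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    // Store the entity id with its date (epoch millis) as the sorted-set score.
    public void index(String id, Date date) {
        redisTemplate.opsForZSet().add(INDEX_KEY, id, date.getTime());
    }

    // Return the ids of all entities whose date falls within [from, to].
    public Set<String> idsBetween(Date from, Date to) {
        return redisTemplate.opsForZSet().rangeByScore(INDEX_KEY, from.getTime(), to.getTime());
    }
}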


Multi-Field Querying on Redis Using Redis Spring


This will be a very basic question, since I'm new to Spring Data Redis.

I'm currently learning about the Redis database while working on a high-priority feature, and I'm required to use Redis for it. Below is the challenge/query I'm having.

Right now we have a DataModel as below:

@RedisHash("Org_Work")
public class OrgWork {

   private @Id @Indexed UUID id;
   private @Indexed String CorpDetails;
   private @Indexed String ContractType;
   private @Indexed String ContractAssigned;
   private @Indexed String State;
   private @Indexed String Country; 

}
public interface OrgWorkRepository extends CrudRepository<OrgWork, UUID> {

    List<OrgWork> findByCorpDetailsAndContractTypeAndContractAssignedAndStateAndCountry(
            String CorpDetails, String ContractType, String ContractAssigned, String State, String Country);

}

We are developing an API to query the above data model: the front end will send us the CorpDetails, ContractType, ContractAssigned, State and Country fields, and we have to query these against the Redis database and return the DurationOfWork object.

In this case the load will be approximately 100,000 calls per minute.

Please let me know whether this is the right approach, along with any suggestions for improving response time.
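
For reference, calling the derived finder from a service might look like the sketch below; the service class and method names are placeholders, not part of the original question:

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class OrgWorkService {

    @Autowired
    private OrgWorkRepository orgWorkRepository;

    // Looks up OrgWork entries matching all five indexed fields sent by the front end.
    public List<OrgWork> find(String corpDetails, String contractType, String contractAssigned,
                              String state, String country) {
        return orgWorkRepository.findByCorpDetailsAndContractTypeAndContractAssignedAndStateAndCountry(
                corpDetails, contractType, contractAssigned, state, country);
    }
}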

***Updated the query

Java 8 Lambda expression with Serialization


In our web application project, we are using Redis to manage sessions. To support this, we serialize every object that will be stored in the session.

For example, we use DTOs to hold the bean data that is displayed on the screen. If a DTO contains another object (composition), that object also has to be serializable; otherwise we get a NotSerializableException.

I ran into a problem when I created an anonymous inner class to implement Comparator, like below:

Collections.sort(people, new Comparator<Person>() {
    public int compare(Person p1, Person p2) {
        return p1.getLastName().compareTo(p2.getLastName());
    }
});

The above code threw a NotSerializableException, which I resolved by creating a class that implements both Comparator and Serializable. The tricky part was that the exception was thrown inside the JSP page that used this DTO, so I had to do a lot of debugging to find the actual cause.

Now I'm considering changing the above code to use a lambda expression, like below:

Collections.sort(people, (p1, p2) -> p1.getLastName().compareTo(p2.getLastName()));

However, I fear the same exception might occur. Does a lambda expression create objects internally?
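
For what it's worth, a lambda does create an object at runtime, and like the anonymous class it is only serializable if its target type is serializable; one way to get that is an intersection cast, sketched below (Person and getLastName() are taken from the snippet above):

import java.io.Serializable;
import java.util.Comparator;
import java.util.List;

public class SortPeople {

    // The cast makes the compiled lambda implement Serializable as well,
    // so it can live inside a session-stored DTO without a NotSerializableException.
    static void sortByLastName(List<Person> people) {
        people.sort((Comparator<Person> & Serializable)
                (p1, p2) -> p1.getLastName().compareTo(p2.getLastName()));
    }
}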

Best practices for connection handling in lettuce.io (Redis)


We are trying to build a Server-Sent Events server in Java with Spring Boot (WebFlux), and we want to use Redis with PUB/SUB. We are using Lettuce (v5.2.1) as the driver, in particular its reactive API. It's not clear to me, and I cannot find any exhaustive documentation on, how to use connections effectively.

I started with the idea of using just one StatefulRedisPubSubConnection, but once I use it to subscribe to a channel, it looks like the status of the connection changes and I cannot perform PUBLISH commands using that connection. So my idea is to instantiate one StatefulRedisPubSubConnection for subscribing and another connection for publishing. Should I check something before using these connections? Is one connection enough? Will Lettuce reconnect on its own after failures?

We are running load tests to understand how many connections a server can handle, but honestly it's very difficult to understand how we should use Lettuce and how this library is supposed to be resilient and to guarantee the best throughput. Thanks a lot!
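
For reference, a common pattern is one dedicated pub/sub connection for subscriptions and a separate ordinary connection for PUBLISH; Lettuce connections are long-lived, thread-safe, and reconnect automatically by default. A minimal sketch with the reactive API; the URI and channel name are placeholders:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

public class PubSubSketch {

    public static void main(String[] args) throws InterruptedException {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Dedicated connection for subscriptions: once SUBSCRIBE is issued,
        // this connection can only run pub/sub commands.
        StatefulRedisPubSubConnection<String, String> subConnection = client.connectPubSub();
        subConnection.reactive().observeChannels()
                .doOnNext(msg -> System.out.println(msg.getChannel() + ": " + msg.getMessage()))
                .subscribe();
        subConnection.reactive().subscribe("events").subscribe();

        // Separate, ordinary connection for PUBLISH (and any other commands).
        StatefulRedisConnection<String, String> pubConnection = client.connect();
        pubConnection.reactive().publish("events", "hello").subscribe();

        Thread.sleep(1000); // demo only: give the message time to arrive

        subConnection.close();
        pubConnection.close();
        client.shutdown();
    }
}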

Usage of the JedisPoolConfig parameter *blockWhenExhausted*


I have a project where I'm using Spring Data Redis to cache some data. Spring Data Redis is set up with Jedis using bean configuration.

I looked at the JedisPoolConfig parameters that can be modified to control the behavior of my caching and my app.

I would like to understand the blockWhenExhausted property, which is one of the configurable properties. Its default value is said to be true; what behaviour does that default produce? And if the value is changed to false, what behaviour does that bring?
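
In short, blockWhenExhausted controls what getResource() does when all pooled connections are in use: with true (the default) the caller blocks, up to maxWaitMillis if that is set, and then fails with an exception; with false it fails immediately instead of waiting. A small sketch of how the two settings are usually combined; the values are placeholders:

import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolConfigSketch {

    static JedisPool buildPool() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(25);

        // true (default): callers wait for a free connection when all 25 are in use...
        poolConfig.setBlockWhenExhausted(true);
        // ...but only up to this long before giving up with an exception.
        poolConfig.setMaxWaitMillis(2000);

        // With setBlockWhenExhausted(false), getResource() would instead fail
        // immediately whenever the pool is exhausted.

        return new JedisPool(poolConfig, "localhost", 6379);
    }
}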

Custom codec using ByteBuffer put/get APIs directly to convert java object to/from ByteBuffer not working with lettuce/redis


I have a Java object with some int/String/enum fields:

public class Key{
      String name;
      int id;
      Map<Integer, Type> valueMap; //type is an enum

       public void write(final ByteBuffer byteBuffer) throws IOException {
        byteBuffer.clear();
        byte[] nameBytes=  name.getBytes(Charset.forName("UTF-8"));
        byteBuffer.putInt(nameBytes.length);
        byteBuffer.put(nameBytes);
        byteBuffer.putInt(id);
        writeMap(valueMap, byteBuffer);
    }

    private void writeMap(Map<Integer, Type> map, ByteBuffer byteBuffer) throws IOException
    {
        byteBuffer.putInt(map.size());
        for(Map.Entry<Integer, Type> e : map.entrySet()) {
            byteBuffer.putInt(e.getKey());
            byteBuffer.putInt(e.getValue().getId());
        }
    }

    public static Key read(final ByteBuffer byteBuffer) throws IOException {
        int stringLen = byteBuffer.getInt();
        byte[] nameBytes = new byte[stringLen];
        byteBuffer.get(nameBytes);
        String name = new String(nameBytes, Charset.forName("UTF-8"));
        int id = byteBuffer.getInt();
        Map<Integer, Type> valueMap = readMap(byteBuffer);
        return new Key(name, id, valueMap);
    }

     private static Map<Integer, Type> readMap(ByteBuffer byteBuffer) throws IOException
    {
        int r = byteBuffer.getInt();
        if(r==-1) {
            return ImmutableMap.of();
        } else {
            ImmutableMap.Builder<Integer, Type> mm = ImmutableMap.builder();
            for(int i=0; i<r; i++) {
                int k = byteBuffer.getInt();
                Type v = Type.of(byteBuffer.getInt());
                mm.put(k,v);
            }
            return mm.build();
        }
    }

    // ... constructor and getters/setters present
}


CustomCodec:



public class CustomCodec implements RedisCodec<Key, Set<Long>> {

    @Override
    public Key decodeKey(ByteBuffer bytes) {
            try {
                bytes.flip();
                return Key.read(bytes);
            } catch (IOException e) {
                return null;
            }

    }

    @Override
    public Set<Long> decodeValue(ByteBuffer bytes) {
            bytes.flip();
            int size = bytes.getInt();
            Set<Long> values = new HashSet<Long>();
            for(int i=0; i<size; i++){
                values.add(bytes.getLong());
            }
            return values;
    }


    @Override
    public ByteBuffer encodeKey(Key key) {
        try {
            ByteBuffer byteBuffer = ByteBuffer.allocate(100);
            key.write(byteBuffer);
            return byteBuffer;
        } catch (IOException e) {
            return null;
        }
    }

    @Override
    public ByteBuffer encodeValue(Set<Long> value) {
        ByteBuffer byteBuffer = ByteBuffer.allocate(value.size() * Long.BYTES + Integer.BYTES);
        byteBuffer.clear();
        byteBuffer.putInt(value.size());
        value.forEach(byteBuffer::putLong);
        byteBuffer.flip();
        return byteBuffer;
    }


}

Using this results in a BufferUnderflowException at encodeValue(). However, when I write a simple test to assert the encode/decode round trip, it works fine. Does anyone know whether Java objects can be converted to a ByteBuffer directly like this and used as Redis keys with Lettuce?
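
For reference, this is roughly how such a codec is plugged into Lettuce; a sketch assuming a local Redis, independent of whatever is going wrong inside the codec itself:

import java.util.Set;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;

public class CodecWiring {

    public static void main(String[] args) {
        // Key and CustomCodec are the classes shown above.
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<Key, Set<Long>> connection = client.connect(new CustomCodec());
        // connection.sync().set(someKey, someValues); etc.
        connection.close();
        client.shutdown();
    }
}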

I tried using ByteArrayOutputStream/ByteArrayInputStream and that works fine. But I read somewhere that it should be faster to use the ByteBuffer directly.

Thanks a lot in advance!

Connection to Encrypted ElastiCache Redis from Java using a CName


I am using the Lettuce driver from Spring Data to connect to an ElastiCache cluster that uses in-transit encryption. When I try to connect to the Route 53 CNAME assigned to the ElastiCache cluster, I get this error:

Caused by: com.lambdaworks.redis.RedisException: Cannot retrieve initial cluster partitions from initial URIs [RedisURI [host='my.cname.net', port=6379]]
    at com.lambdaworks.redis.cluster.RedisClusterClient.loadPartitions(RedisClusterClient.java:507)
    at com.lambdaworks.redis.cluster.RedisClusterClient.initializePartitions(RedisClusterClient.java:481)
    at com.lambdaworks.redis.cluster.RedisClusterClient.connectClusterAsyncImpl(RedisClusterClient.java:335)
    at com.lambdaworks.redis.cluster.RedisClusterClient.connectClusterAsync(RedisClusterClient.java:273)
    at org.springframework.data.redis.connection.lettuce.LettuceClusterConnection.doGetAsyncDedicatedConnection(LettuceClusterConnection.java:1250)
    at org.springframework.data.redis.connection.lettuce.LettuceConnection.getAsyncDedicatedConnection(LettuceConnection.java:3466)
    at org.springframework.data.redis.connection.lettuce.LettuceConnection.getDedicatedConnection(LettuceConnection.java:3487)
    at org.springframework.data.redis.connection.lettuce.LettuceConnection.getConnection(LettuceConnection.java:3460)
    at org.springframework.data.redis.connection.lettuce.LettuceConnection.sMembers(LettuceConnection.java:1998)
    ... 24 common frames omitted

Here is the code I am using to connect:

@Bean
public static RedisConnectionFactory connectionFactory() {
    Map<String, Object> source = Maps.newHashMap();
    source.put("spring.redis.cluster.nodes", "my.cname.net:6379");
    RedisClusterConfiguration clusterConfiguration = new RedisClusterConfiguration(new MapPropertySource("RedisClusterConfiguration", source));
    clusterConfiguration.setMaxRedirects(10);
    LettuceConnectionFactory factory = new LettuceConnectionFactory(clusterConfiguration);
    factory.setValidateConnection(false);
    factory.setUseSsl(true);
    return factory;
}

When I replace the CNAME with the actual network name attached to the ElastiCache cluster, the program works. Does anyone know why the program fails only when using the CNAME?

Why do I run into RedisSystemException: Redis command interrupted in Spring Boot? [closed]


I use Redis in my Spring Boot application and sometimes run into the error below. Can anyone help me and tell me why? Many thanks:

{"log":"org.springframework.data.redis.RedisSystemException: Redis command interrupted; nested exception is io.lettuce.core.RedisCommandInterruptedException: Command interrupted\n","stream":"stderr",
{"log":"\u0009at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:62)\n","stream":"stderr","time":"2020-01-31T22:40:22.696884104Z",
{"log":"\u0009at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41)\n","stream":"stderr","time":"2020-01-31T22:40:22.696902037Z","
{"log":"\u0009at org.springframework.data.redis.PassThroughExceptionTranslationStrategy.translate(PassThroughExceptionTranslationStrategy.java:44)\n","stream":"stderr","time":"2020-01-31T22:40:22.69690758Z",
{"log":"\u0009at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:42)\n","stream":"stderr","time":"2020-01-31T22:40:22.696913042Z","
{"log":"\u0009at org.springframework.data.redis.connection.lettuce.LettuceConnection.convertLettuceAccessException(LettuceConnection.java:268)\n","stream":"stderr","time":"2020-01-31T22:40:22.696917716Z","
{"log":"\u0009at org.springframework.data.redis.connection.lettuce.LettuceStringCommands.convertLettuceAccessException(LettuceStringCommands.java:799)\n","stream":"stderr","time":"2020-01-31T22:40:22.696921625Z",
{"log":"\u0009at org.springframework.data.redis.connection.lettuce.LettuceStringCommands.setEx(LettuceStringCommands.java:232)\n","stream":"stderr","time":"2020-01-31T22:40:22.69692579Z","serviceName":"panorama","
{"log":"\u0009at org.springframework.data.redis.connection.DefaultedRedisConnection.setEx(DefaultedRedisConnection.java:295)\n","stream":"stderr","time":"2020-01-31T22:40:22.696929651Z","serviceName":"panorama",
{"log":"\u0009at org.springframework.data.redis.core.DefaultValueOperations$4.potentiallyUsePsetEx(DefaultValueOperations.java:268)\n","stream":"stderr","time":"2020-01-31T22:40:22.696934348Z","serviceName":
{"log":"\u0009at org.springframework.data.redis.core.DefaultValueOperations$4.doInRedis(DefaultValueOperations.java:261)\n","stream":"stderr","time":"2020-01-31T22:40:22.696938701Z","serviceName":"panorama","
{"log":"\u0009at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:224)\n","stream":"stderr","time":"2020-01-31T22:40:22.696943427Z",
{"log":"\u0009at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:184)\n","stream":"stderr","time":"2020-01-31T22:40:22.69696076Z",
{"log":"\u0009at org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:95)\n","stream":"stderr","time":"2020-01-31T22:40:22.696965087Z","
{"log":"\u0009at org.springframework.data.redis.core.DefaultValueOperations.set(DefaultValueOperations.java:256)\n","stream":"stderr","time":"2020-01-31T22:40:22.696968671Z","
{"log":"\u0009at com.gf.crm.serviceImpl.PcClientServiceImpl.clientList(PcClientServiceImpl.java:258)\n","stream":"stderr","time":"2020-01-31T22:40:22.696976702Z","
{"log":"\u0009at jdk.internal.reflect.GeneratedMethodAccessor185.invoke(Unknown Source)\n","stream":"stderr","time":"2020-01-31T22:40:22.696980461Z","
{"log":"\u0009at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n","stream":"stderr","time":"2020-01-31T22:40:22.696983713Z",
{"log":"\u0009at java.base/java.lang.reflect.Method.invoke(Method.java:567)\n","stream":"stderr","time":"2020-01-31T22:40:22.696987223Z",
{"log":"\u0009at com.netflix.hystrix.contrib.javanica.command.MethodExecutionAction.execute(MethodExecutionAction.java:116)\n","stream":"stderr","time":"2020-01-31T22:40:22.69699075Z",
{"log":"\u0009at com.netflix.hystrix.contrib.javanica.command.MethodExecutionAction.executeWithArgs(MethodExecutionAction.java:93)\n","stream":"stderr","time":"2020-01-31T22:40:22.696994192Z",
{"log":"\u0009at com.netflix.hystrix.contrib.javanica.command.MethodExecutionAction.execute(MethodExecutionAction.java:78)\n","stream":"stderr","time":"2020-01-31T22:40:22.696997568Z",
{"log":"\u0009at com.netflix.hystrix.contrib.javanica.command.GenericCommand$1.execute(GenericCommand.java:48)\n","stream":"stderr","time":"2020-01-31T22:40:22.697001064Z",
{"log":"\u0009at com.netflix.hystrix.contrib.javanica.command.AbstractHystrixCommand.process(AbstractHystrixCommand.java:145)\n","stream":"stderr","time":"2020-01-31T22:40:22.697004726Z","
{"log":"\u0009at com.netflix.hystrix.contrib.javanica.command.GenericCommand.run(GenericCommand.java:45)\n","stream":"stderr","time":"2020-01-31T22:40:22.697008315Z","serviceName":"panorama",
{"log":"\u0009at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:302)\n","stream":"stderr","time":"2020-01-31T22:40:22.697011721Z",
{"log":"\u0009at com.netflix.hystrix.HystrixCommand$2.call(HystrixCommand.java:298)\n","stream":"stderr","time":"2020-01-31T22:40:22.697015106Z","s
{"log":"\u0009at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:46)\n","stream":"stderr","time":"2020-01-31T22:40:22.697018461Z",
{"log":"\u0009at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)\n","stream":"stderr","time":"2020-01-31T22:40:22.697022021Z",
{"log":"\u0009at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)\n","stream":"stderr","time":"2020-01-31T22:40:22.697025469Z","
{"log":"\u0009at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)\n","stream":"stderr","time":"2020-01-31T22:40:22.697035708Z",
{"log":"\u0009at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)\n","stream":"stderr","time":"2020-01-31T22:40:22.697038986Z",
{"log":"\u0009at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)\n","stream":"stderr","time":"2020-01-31T22:40:22.697042399Z","
{"log":"\u0009at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)\n","stream":"stderr","time":"2020-01-31T22:40:22.697045922Z","
{"log":"\u0009at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)\n","stream":"stderr","time":"2020-01-31T22:40:22.697049521Z",
{"log":"\u0009at rx.Observable.unsafeSubscribe(Observable.java:10327)\n","stream":"stderr","time":"2020-01-31T22:40:22.697052985Z","
{"log":"\u0009at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:51)\n","stream":"stderr","time":"2020-01-31T22:40:22.697056383Z",
{"log":"\u0009at rx.internal.operators.OnSubscribeDefer.call(OnSubscribeDefer.java:35)\n","stream":"stderr","time":"2020-01-31T22:40:22.697063918Z",
{"log":"\u0009at rx.Observable.unsafeSubscribe(Observable.java:10327)\n","stream":"stderr","time":"2020-01-31T22:40:22.697067543Z","
{"log":"\u0009at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:41)\n","stream":"stderr","time":"2020-01-31T22:40:22.697071008Z",
{"log":"\u0009at rx.internal.operators.OnSubscribeDoOnEach.call(OnSubscribeDoOnEach.java:30)\n","stream":"stderr","time":"2020-01-31T22:40:22.697074374Z","
{"log":"\u0009at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:48)\n","stream":"stderr","time":"2020-01-31T22:40:22.697077834Z","
{"log":"\u0009at rx.internal.operators.OnSubscribeLift.call(OnSubscribeLift.java:30)\n","stream":"stderr","time":"2020-01-31T22:40:22.697081212Z",
{"log":"\u0009at rx.Observable.unsafeSubscribe(Observable.java:10327)\n","stream":"stderr","time":"2020-01-31T22:40:22.697084588Z","
{"log":"\u0009at rx.internal.operators.OperatorSubscribeOn$SubscribeOnSubscriber.call(OperatorSubscribeOn.java:100)\n","stream":"stderr","time":"2020-01-31T22:40:22.697087973Z",
{"log":"\u0009at com.netflix.hystrix.strategy.concurrency.HystrixContexSchedulerAction$1.call(HystrixContexSchedulerAction.java:56)\n","stream":"stderr","time":"2020-01-31T22:40:22.697093471Z","
{"log":"\u0009at com.netflix.hystrix.strategy.concurrency.HystrixContexSchedulerAction$1.call(HystrixContexSchedulerAction.java:47)\n","stream":"stderr","time":"2020-01-31T22:40:22.697097512Z","
{"log":"\u0009at com.netflix.hystrix.strategy.concurrency.HystrixContexSchedulerAction.call(HystrixContexSchedulerAction.java:69)\n","stream":"stderr","time":"2020-01-31T22:40:22.697101229Z",
{"log":"\u0009at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)\n","stream":"stderr","time":"2020-01-31T22:40:22.697106644Z",
{"log":"\u0009at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n","stream":"stderr","time":"2020-01-31T22:40:22.697110337Z",
{"log":"\u0009at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n","stream":"stderr","time":"2020-01-31T22:40:22.697113873Z","serviceName":"panorama",
{"log":"\u0009at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n","stream":"stderr","time":"2020-01-31T22:40:22.697117543Z",
{"log":"\u0009at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n","stream":"stderr","time":"2020-01-31T22:40:22.697121346Z","
{"log":"\u0009at java.base/java.lang.Thread.run(Thread.java:835)\n","stream":"stderr","time":"2020-01-31T22:40:22.697127846Z","serviceName":"
{"log":"Caused by: io.lettuce.core.RedisCommandInterruptedException: Command interrupted\n","stream":"stderr","time":"2020-01-31T22:40:22.69713152Z","
{"log":"\u0009at io.lettuce.core.protocol.AsyncCommand.await(AsyncCommand.java:87)\n","stream":"stderr","time":"2020-01-31T22:40:22.697135076Z","
{"log":"\u0009at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:112)\n","stream":"stderr","time":"2020-01-31T22:40:22.697138263Z",
{"log":"\u0009at io.lettuce.core.cluster.ClusterFutureSyncInvocationHandler.handleInvocation(ClusterFutureSyncInvocationHandler.java:123)\n","stream":"stderr","time":"2020-01-31T22:40:22.697144375Z","
{"log":"\u0009at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)\n","stream":"stderr","time":"2020-01-31T22:40:22.697147975Z",
{"log":"\u0009at com.sun.proxy.$Proxy255.setex(Unknown Source)\n","stream":"stderr","time":"2020-01-31T22:40:22.697151716Z",
{"log":"\u0009at org.springframework.data.redis.connection.lettuce.LettuceStringCommands.setEx(LettuceStringCommands.java:230)\n","stream":"stderr","time":"2020-01-31T22:40:22.697155286Z",
{"log":"\u0009... 46 more\n","stream":"stderr","time":"2020-01-31T22:40:22.697158933Z",
{"log":"Caused by: java.lang.InterruptedException\n","stream":"stderr","time":"2020-01-31T22:40:22.697423989Z","
{"log":"\u0009at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:385)\n","stream":"stderr","time":"2020-01-31T22:40:22.697440834Z",
{"log":"\u0009at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2093)\n","stream":"stderr","time":"2020-01-31T22:40:22.697449402Z",
{"log":"\u0009at io.lettuce.core.protocol.AsyncCommand.await(AsyncCommand.java:83

I use redisTemplate.opsForValue().set(key, result, DEFAULT_EXPIRE, TimeUnit.SECONDS); in a method, and this method is wrapped by a Hystrix command.
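
For context, the calling code looks roughly like the sketch below; the class, method and expiry value are placeholders, only the redisTemplate call and the Hystrix wrapping come from the description above:

import java.util.concurrent.TimeUnit;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@Service
public class ClientListService {

    private static final long DEFAULT_EXPIRE = 300L; // seconds, placeholder value

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // The whole method runs inside a Hystrix command; if Hystrix times out or the
    // command is cancelled, it likely interrupts the worker thread, and the blocking
    // Lettuce call inside set(...) then surfaces as RedisCommandInterruptedException.
    @HystrixCommand
    public Object clientList(String key, Object result) {
        redisTemplate.opsForValue().set(key, result, DEFAULT_EXPIRE, TimeUnit.SECONDS);
        return result;
    }
}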

notify-keyspace-events in Redis


I'd like to work with Redis for managing sessions, but I get a failure when running the Spring Boot app. I suspect this error comes from the Maven dependencies, especially version conflicts.

Here are my Maven dependencies:

<dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-test</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>io.lettuce</groupId>
            <artifactId>lettuce-core</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.session</groupId>
            <artifactId>spring-session-data-redis</artifactId>
        </dependency>
        <dependency>
            <groupId>biz.paluch.redis</groupId>
            <artifactId>lettuce</artifactId>
            <version>4.3.1.Final</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.session</groupId>
            <artifactId>spring-session</artifactId>
            <version>1.3.3.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>com.github.kstyrc</groupId>
            <artifactId>embedded-redis</artifactId>
            <version>0.6</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>


    </dependencies>

I tried to avoid the notify-keyspace-events problem by adding the embedded-redis dependency to pom.xml, but without success.

Notice that I added two dependencies in the pom.xml above whose artifactIds are spring-session-data-redis and lettuce-core. These dependencies are respectively responsible for the Redis connection and for ensuring thread safety for session connections.

Below is the config class for the Redis HTTP session:

@Configuration
@EnableRedisHttpSession
public class HttpSessionConfig {
    @Bean
    public LettuceConnectionFactory connectionFactory() {
        return new LettuceConnectionFactory(); 
    }

}

I also configured HTTP session management using spring-session in the component below:

import org.springframework.session.web.http.HeaderHttpSessionStrategy;
import org.springframework.session.web.http.HttpSessionStrategy;

    @Configuration
    @EnableWebSecurity
    public class SecurityConfig extends WebSecurityConfigurerAdapter {
        //some code here
        public HttpSessionStrategy httpSessionStrategy() {
                return new HeaderHttpSessionStrategy();
            }
    }

But when I run the Spring Boot app, I get the following runtime error:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'enableRedisKeyspaceNotificationsInitializer' defined in class path resource [org/springframework/session/data/redis/config/annotation/web/http/RedisHttpSessionConfiguration.class]: Invocation of init method failed; nested exception is org.springframework.data.redis.RedisSystemException: Error in execution; nested exception is io.lettuce.core.RedisCommandExecutionException: ERR Unsupported CONFIG parameter: notify-keyspace-events
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1699) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:573) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:495) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:759) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:869) ~[spring-context-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) ~[spring-context-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140) ~[spring-boot-2.0.5.RELEASE.jar:2.0.5.RELEASE]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:780) [spring-boot-2.0.5.RELEASE.jar:2.0.5.RELEASE]
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:412) [spring-boot-2.0.5.RELEASE.jar:2.0.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:333) [spring-boot-2.0.5.RELEASE.jar:2.0.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1277) [spring-boot-2.0.5.RELEASE.jar:2.0.5.RELEASE]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1265) [spring-boot-2.0.5.RELEASE.jar:2.0.5.RELEASE]
    at com.example.demo.BookStoreApplication.main(BookStoreApplication.java:29) [classes/:na]
Caused by: org.springframework.data.redis.RedisSystemException: Error in execution; nested exception is io.lettuce.core.RedisCommandExecutionException: ERR Unsupported CONFIG parameter: notify-keyspace-events
    at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:54) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:52) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.data.redis.PassThroughExceptionTranslationStrategy.translate(PassThroughExceptionTranslationStrategy.java:44) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:42) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.data.redis.connection.lettuce.LettuceConnection.convertLettuceAccessException(LettuceConnection.java:257) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.data.redis.connection.lettuce.LettuceServerCommands.convertLettuceAccessException(LettuceServerCommands.java:571) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.data.redis.connection.lettuce.LettuceServerCommands.setConfig(LettuceServerCommands.java:332) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.data.redis.connection.DefaultedRedisConnection.setConfig(DefaultedRedisConnection.java:1126) ~[spring-data-redis-2.0.10.RELEASE.jar:2.0.10.RELEASE]
    at org.springframework.session.data.redis.config.ConfigureNotifyKeyspaceEventsAction.configure(ConfigureNotifyKeyspaceEventsAction.java:70) ~[spring-session-data-redis-2.0.6.RELEASE.jar:2.0.6.RELEASE]
    at org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration$EnableRedisKeyspaceNotificationsInitializer.afterPropertiesSet(RedisHttpSessionConfiguration.java:286) ~[spring-session-data-redis-2.0.6.RELEASE.jar:2.0.6.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1758) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1695) ~[spring-beans-5.0.9.RELEASE.jar:5.0.9.RELEASE]
    ... 16 common frames omitted
Caused by: io.lettuce.core.RedisCommandExecutionException: ERR Unsupported CONFIG parameter: notify-keyspace-events
    at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:118) ~[lettuce-core-5.0.5.RELEASE.jar:na]
    at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:109) ~[lettuce-core-5.0.5.RELEASE.jar:na]
    at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:598) ~[lettuce-core-5.0.5.RELEASE.jar:na]
    at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:556) ~[lettuce-core-5.0.5.RELEASE.jar:na]
    at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:508) ~[lettuce-core-5.0.5.RELEASE.jar:na]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:628) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:563) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) ~[netty-transport-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) ~[netty-common-4.1.29.Final.jar:4.1.29.Final]
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.29.Final.jar:4.1.29.Final]
    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_144]

Any help is very much appreciated. Thanks in advance for your reply.
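
One commonly suggested workaround when the Redis server rejects CONFIG commands (as many managed or locked-down instances do) is to tell Spring Session not to apply notify-keyspace-events itself and to enable keyspace notifications on the server side instead. A sketch of that extra configuration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.data.redis.config.ConfigureRedisAction;

@Configuration
public class RedisSessionWorkaroundConfig {

    // Skips Spring Session's startup "CONFIG SET notify-keyspace-events" call,
    // which is what fails here; notifications then need to be enabled in redis.conf
    // (or by the hosting provider) instead.
    @Bean
    public static ConfigureRedisAction configureRedisAction() {
        return ConfigureRedisAction.NO_OP;
    }
}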

Redis - How to configure custom conversions

Redis trying to connect to localhost when multiple remote connections are specified in Micronaut configuration


I have multiple Redis configs looking like this:

redis:
  servers:
    dev-redis-1:
      uri: # redis url
      ssl: true
      timeout: 60s
    dev-redis-2:
      uri: # redis url
      ssl: true
      timeout: 60s  

In my bean I use it like this:

  @Inject
  @Named("dev-redis-2")
  StatefulRedisConnection<String, String> redisConnection;  

This is how it is described in the docs, and a similar bug report was opened on GitHub.

But instead of connecting to the server configured as dev-redis-2, it tries to connect to localhost:6379.

Micronaut: 1.2.9
Java 11

It seems like the bug hasn't been fixed, but I'm not sure.

Does a fix exist to achieve my target or should I open an issue on GitHub?

Spring Boot Redis: Caching Objects from Backend Services for parallel consumer requests for the same Object


I didn't know how to explain my challenge better in the title.

The current setup is that I'm using a Spring REST service as a "middleware layer" to transform the backend's data, which is a huge response, into a more "friendly" shape.

This is, for example, the structure of the backend data:

public class Customer
{
   private String id;
   private String bankAccount;
   private String customerName;
}

The middleware currently caches the response from the backend in Redis. It also has the following endpoints:

public class ServiceController
{
   getBankAccountById(String id);
   getCustomerNameById(String id);
   getCustomerObjectById(String id);
}

Each of these requests produces a backend call for the customer object and caches it in Redis if no object is already present.

But in a multi-threaded environment, and especially in a "cloud" / multi-instance environment, how can I ensure that if the consumer makes n requests in parallel (#1 getBankAccountById(), #2 getCustomerNameById(), etc., in the exact same millisecond), only one single request is fired against the "real" backend?

My goal is something like putting a marker into Redis indicating that a Customer object for a given id will be in the cache in the near future, which would block all other threads (including threads of other instances) in order to reduce backend calls.

My question: is there a simple, maybe even out-of-the-box, solution for that?

The only thing I found was the Spring Boot documentation on @Cacheable, which is not feasible, since it synchronizes the calls for a given id only within the same application, not service-wide in a clustered environment.

In addition, the backend is really slow (5-10 seconds per call), so @Cacheable is bypassed by parallel calls anyway, since it only starts writing to the cache once the actual @Cacheable object is returned.

Thanks in advance, Cheers Alex

// Edit: I meant that @Cacheable(sync=true) only works at the instance level, not at a distributed level. Therefore it doesn't really help here, from my understanding.
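
As a point of reference, the "marker" idea maps naturally onto Redis SET NX EX, which is atomic inside Redis and therefore works across instances. A rough sketch, assuming Spring Data Redis 2.1+ for the Duration overload; the key prefix, timeout and class are placeholders:

import java.time.Duration;

import org.springframework.data.redis.core.StringRedisTemplate;

public class CustomerCacheLoader {

    private final StringRedisTemplate redis;

    public CustomerCacheLoader(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // Only the instance that wins the SET NX performs the slow backend call;
    // everyone else waits/retries or serves the cached value once it appears.
    public boolean tryAcquireLoadMarker(String customerId) {
        Boolean acquired = redis.opsForValue()
                .setIfAbsent("loading:customer:" + customerId, "1", Duration.ofSeconds(30));
        return Boolean.TRUE.equals(acquired);
    }

    public void releaseLoadMarker(String customerId) {
        redis.delete("loading:customer:" + customerId);
    }
}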

Is it possible to customize serialization used by the Spring Cache abstraction?


I have a Java web service that uses Redis for caching. Initially I created a CacheService that directly accessed the Redisson client in order to handle caching. I recently refactored the cache handling to use the Spring Cache abstraction, which made the code a lot cleaner and encouraged modular design. Unfortunately Spring uses Jackson to serialize/deserialize the cached objects, resulting in the cached values being much larger than before due to type info being stored in the JSON. This caused an unacceptable increase in response time in reads from the cache. Is there any way to customize the way that Spring serializes and deserializes the cached content? I'd like to replace it with my own logic, but don't see anything in the docs. I'd rather not have to roll my own AspectJ cache implementation if possible.
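
For reference, when the cache manager in play is Spring Data Redis's RedisCacheManager, the value serializer is pluggable per RedisCacheConfiguration, as sketched below; the serializer shown is just a stand-in for custom logic, and a Redisson-provided cache manager would instead be configured through its codec:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.serializer.JdkSerializationRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;

@Configuration
public class CacheConfig {

    // Swaps the default value serialization for another strategy; any RedisSerializer
    // implementation (including a hand-rolled one) can be plugged in here.
    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new JdkSerializationRedisSerializer()));
        return RedisCacheManager.builder(connectionFactory).cacheDefaults(config).build();
    }
}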

Why is redisTemplate.opsForValue().get() always not null?


I use

@Autowired private RedisTemplate<String, Object> redisTemplate;

and

redisTemplate.opsForValue().get(key);

My IDE gives me a warning saying the result is always not null,

but I see that the method V get(Object key) is annotated @Nullable, and I think V get(Object key) should return null when no such key exists.


Any ideas? Thanks!
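
For what it's worth, the contract really is nullable, so callers still need to handle a missing key regardless of what the IDE inspection claims; a small sketch:

import java.util.Optional;

import org.springframework.data.redis.core.RedisTemplate;

public class CacheReader {

    private final RedisTemplate<String, Object> redisTemplate;

    public CacheReader(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // get() may legitimately return null when the key is absent or expired,
    // so wrap it rather than trusting the "always not null" inspection.
    public Optional<Object> read(String key) {
        return Optional.ofNullable(redisTemplate.opsForValue().get(key));
    }
}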


Spring Redis Get Values by Wildcard Keys


I am using Spring Data RedisTemplate (not Repository). Everything works fine with

template.opsForValues().get("mykey:1")

But I have some more complex keys such as "myobject:somesituation:1" and "myobject:anothersituation:2" and so on. I need to do something like:

template.opsForValues().get("myobject:somesituation:*")

With the wildcard, I would like to get all values under "somesituation", no matter what the id is.

Using the Redis command line, I have no problem doing this.

Note: I am using the reactive template; I don't know whether this could be the problem. Note 2: After some research, I have only found posts about Spring repositories, getting all keys, using the command line, etc., but nothing about my specific problem.
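
For reference, the value operations have no pattern-matching get, but the reactive template can expand the pattern to concrete keys first and then fetch each value; a sketch (KEYS is fine for experimenting, SCAN is preferable on large production datasets):

import org.springframework.data.redis.core.ReactiveRedisTemplate;
import reactor.core.publisher.Flux;

public class WildcardReader {

    private final ReactiveRedisTemplate<String, String> template;

    public WildcardReader(ReactiveRedisTemplate<String, String> template) {
        this.template = template;
    }

    // Expands the wildcard to concrete keys, then fetches each value.
    public Flux<String> valuesMatching(String pattern) {
        return template.keys(pattern)                       // e.g. "myobject:somesituation:*"
                .flatMap(key -> template.opsForValue().get(key));
    }
}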

JedisPool getResource() adds too much latency


I have a function that creates the pool configuration for a JedisPool:

    final JedisPoolConfig poolConfig = new JedisPoolConfig();
    poolConfig.setMaxTotal(25);
    poolConfig.setMaxIdle(20);
    poolConfig.setMinIdle(20);
    poolConfig.setTestOnBorrow(false);
    poolConfig.setTestOnCreate(true);
    poolConfig.setTestOnReturn(true);
    poolConfig.setTestWhileIdle(false);
    poolConfig.setMinEvictableIdleTimeMillis(-1);
    poolConfig.setTimeBetweenEvictionRunsMillis(-1); // don't evict
    poolConfig.setNumTestsPerEvictionRun(-1);
    poolConfig.setBlockWhenExhausted(false);
    poolConfig.setLifo(false);
    return poolConfig;

I am getting a client from the pool as shown below. I see high latency: sometimes 100-500 ms, and sometimes around 30 ms, just to get a client from the pool. I am testing at 52 rps with a configuration of 20 idle connections and 25 max connections, which should be capable of handling ~600 rps. Any idea what causes the latency? Is there any tweak at the pool level worth validating?

    Instant instant = Instant.now();
    Instant getFromPool = null;
    try (final Jedis jedis = jedisPool.getResource()){
        getFromPool = Instant.now();
    }

Exception in thread "main" redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster


I am trying to connect from Java to a Redis cluster installed on a Linux box, in order to store a JSON string.

Code :

public JedisCluster getRedisCluster(){


    Set<HostAndPort> jedisClusterNode = new HashSet<HostAndPort>();
    jedisClusterNode.add(new HostAndPort("redis-test-cluster1", 6379));

    JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
    jedisPoolConfig.setMaxTotal(10);
    jedisPoolConfig.setMaxIdle(10);
    jedisPoolConfig.setMaxWaitMillis(10000);
   jedisPoolConfig.setTestOnBorrow(true); 
    JedisCluster jedisCluster = new JedisCluster(jedisClusterNode, 10000, 1,10, "passwordString",jedisPoolConfig);
    return jedisCluster;
}



    public static void main(String[] args) {
    String jsonString = new String("{\"Test1\": \"data1\", \"Test2\": 42}");

    Map<String,String> map = new HashMap<String,String>();
    map.put("testjson",jsonString);

    JedisCluster jedisCluster = new RedisJavaClient().getRedisCluster();
    jedisCluster.hmset("idtest",map);

     String value = jedisCluster.hget("testjson","idtest");
     System.out.println("value passed : "+value);



}

And I am getting the exception below:

Exception in thread "main" redis.clients.jedis.exceptions.JedisNoReachableClusterNodeException: No reachable node in cluster
    at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnection(JedisSlotBasedConnectionHandler.java:69)
    at redis.clients.jedis.JedisSlotBasedConnectionHandler.getConnectionFromSlot(JedisSlotBasedConnectionHandler.java:86)
    at redis.clients.jedis.JedisClusterCommand.runWithRetries(JedisClusterCommand.java:102)
    at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:25)
    at redis.clients.jedis.JedisCluster.hmset(JedisCluster.java:513)
    at com.connection.jedisclient.Test.main(Test.java:25)

I am able to set it manually in the cluster as below:

127.0.0.1:6379> hmset users jsontest "{\"Test1\": \"data1\", \"Test2\": 42}"

OK

127.0.0.1:6379> hget users jsontest

"{\"Test1\": \"data1\", \"Test2\": 42}"

How to solve 429 Too Many Requests


My Spring application uses a Redis list to fetch and send notifications via RestTemplate.
In addition, I'm using a Firebase function to do this.
The problem is that when I try to send more than 50 notifications in a minute, I get this error:

org.springframework.web.client.HttpClientErrorException$TooManyRequests: 429 Too Many Requests
    at org.springframework.web.client.HttpClientErrorException.create(HttpClientErrorException.java:97) ~[spring-web-5.2.1.RELEASE.jar:5.2.1.RELEASE]
    at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:123) ~[spring-web-5.2.1.RELEASE.jar:5.2.1.RELEASE]
    at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:102) ~[spring-web-5.2.1.RELEASE.jar:5.2.1.RELEASE]
    at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63) ~[spring-web-5.2.1.RELEASE.jar:5.2.1.RELEASE]
    at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:785) ~[spring-web-5.2.1.RELEASE.jar:5.2.1.RELEASE]
    at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:743) ~[spring-web-5.2.1.RELEASE.jar:5.2.1.RELEASE]
    at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:677) ~[spring-web-5.2.1.RELEASE.jar:5.2.1.RELEASE]
    at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:586) ~[spring-web-5.2.1.RELEASE.jar:5.2.1.RELEASE]
    at it.visualsoftware.notificator.RestTemplate.RestTemplateService.SendNotification(RestTemplateService.java:46) ~[classes/:na]
    at it.visualsoftware.notificator.redis.RedisQueueSend.listener(RedisQueueSend.java:54) ~[classes/:na]

Is there a way to increase this limit? Is it related to Firebase or to RestTemplate?
Thank you for your help.
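
The 429 comes back from the remote endpoint, so the limit belongs to the service being called rather than to RestTemplate itself; the usual client-side mitigation is to slow down and retry after the indicated delay. A rough sketch of honoring Retry-After; the class is a placeholder and it assumes Retry-After is given in seconds:

import java.util.concurrent.TimeUnit;

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

public class NotificationSender {

    private final RestTemplate restTemplate = new RestTemplate();

    // Retries once after the server-suggested delay (falls back to 60s if absent).
    public ResponseEntity<String> sendWithBackoff(String url, HttpEntity<String> request)
            throws InterruptedException {
        try {
            return restTemplate.exchange(url, HttpMethod.POST, request, String.class);
        } catch (HttpClientErrorException.TooManyRequests e) {
            String retryAfter = e.getResponseHeaders() != null
                    ? e.getResponseHeaders().getFirst(HttpHeaders.RETRY_AFTER) : null;
            long waitSeconds = retryAfter != null ? Long.parseLong(retryAfter) : 60L;
            TimeUnit.SECONDS.sleep(waitSeconds);
            return restTemplate.exchange(url, HttpMethod.POST, request, String.class);
        }
    }
}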

@user_script:1: WRONGTYPE Operation against a key holding the wrong kind of value


The following is my Lua script:

if redis.call('sismember',KEYS[1],ARGV[1])==1
then redis.call('srem',KEYS[1],ARGV[1])
else return 0
end
store = tonumber(redis.call('hget',KEYS[2],'capacity'))
store = store + 1
redis.call('hset',KEYS[2],'capacity',store)
return 1

When I run this script in Java, an exception like

@user_script:1: WRONGTYPE Operation against a key holding the wrong kind of value

is thrown. The Java code is like:

Object obj = jedis.evalsha(sha, 2, userName.getBytes(),
        id.getBytes(), id.getBytes());

where userName is "tau" and id is "002" in my code. I checked the types of "tau" and "002" as follows:

127.0.0.1:6379> type tau
set
127.0.0.1:6379> type 002
hash

and their exact contents are:

127.0.0.1:6379> hgetall 002
name
"鏁版嵁搴撲粠鍒犲簱鍒拌窇璺?
teacher
"taochq"
capacity
54
127.0.0.1:6379> smembers tau
002
004
001
127.0.0.1:6379>

Now I'm confused and don't know what's wrong; any help would be appreciated.
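
For comparison, the same call with the String-based evalsha overloads, which keeps the KEYS/ARGV positions explicit; a sketch where sha, userName and id come from the surrounding code:

import java.util.Arrays;

import redis.clients.jedis.Jedis;

public class EvalShaSketch {

    // KEYS[1] = userName ("tau", a set), KEYS[2] = id ("002", a hash), ARGV[1] = id.
    static Object runScript(Jedis jedis, String sha, String userName, String id) {
        return jedis.evalsha(sha, 2, userName, id, id);
    }

    // Equivalent list-based overload:
    static Object runScriptWithLists(Jedis jedis, String sha, String userName, String id) {
        return jedis.evalsha(sha, Arrays.asList(userName, id), Arrays.asList(id));
    }
}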
