Channel: Active questions tagged redis+java - Stack Overflow

Inconsistent behaviour of Lettuce when polling different topics from a Redis server inside Kubernetes


I am currently encountering issues when trying to poll two different topics of a Redis server via Lettuce inside a Spring Boot application running in a Kubernetes cluster. Polling messages from the first topic always works, but the second one only seems to work by chance after restarting the Kubernetes cluster (which may suggest it has something to do with the deployment).

The messages are correctly published (they are visible inside RedisInsight), and both polling classes use the same Java code to poll via Lettuce, which makes it strange that only one of them works. The polling is implemented as follows:

    @PostConstruct
    @Override
    protected void connectToRedis() {
        RedisClient redisClient = RedisClient.create("redis://" + REDIS_ADDRESS + ":" + REDIS_PORT);
        StatefulRedisConnection<String, String> connection = redisClient.connect();
        session = connection.sync();
        initialize(session);
    }

    @Override
    @Scheduled(initialDelay = INITIAL_DELAY_MS, fixedDelay = FIXED_DELAY_MS)
    public void pullEvents() {
        System.out.println("pullEvents for member service entered");
        for (String key : STREAMS_KEYS) {
            Map<String, String> jsonEvents = pullJSONs(key);
            for (Map.Entry<String, String> entry : jsonEvents.entrySet()) {
                readEvents(entry.getValue(), key, entry.getKey());
            }
        }
    }
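For context, `pullJSONs` is not shown above; a minimal sketch of what such a method might look like with Lettuce's synchronous Streams API is below. The `lastSeenIds` map and the `payload` field name are assumptions on my part, not the actual implementation:

```java
import io.lettuce.core.StreamMessage;
import io.lettuce.core.XReadArgs;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Tracks the last-delivered entry ID per stream so each scheduled
// poll only reads entries that arrived since the previous poll.
private final Map<String, String> lastSeenIds = new HashMap<>();

protected Map<String, String> pullJSONs(String key) {
    // "0-0" reads the stream from the beginning on the first poll.
    String lastId = lastSeenIds.getOrDefault(key, "0-0");
    List<StreamMessage<String, String>> messages =
            session.xread(XReadArgs.StreamOffset.from(key, lastId));
    Map<String, String> result = new LinkedHashMap<>();
    for (StreamMessage<String, String> message : messages) {
        // "payload" is an assumed field name for the JSON body.
        result.put(message.getId(), message.getBody().get("payload"));
        lastSeenIds.put(key, message.getId());
    }
    return result;
}
```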

As mentioned above, my application runs inside a Kubernetes cluster alongside other microservices. It listens to the events I publish myself (for CQRS) as well as to the events from another microservice that publishes to the same Redis server under a different topic name. Every microservice has a Redis service and deployment YAML in its respective folder like the one below (it always looks identical because we wanted to share one Redis instance, and I have already verified via kubectl get all that only one Redis instance is running):

redisService.yaml:

    apiVersion: v1
    kind: Service
    metadata:
      name: redis
      labels:
        app: redis
    spec:
      selector:
        app: redis
      ports:
        - name: redis
          port: 6379

redisDeployment.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis
      labels:
        app: redis
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
            - name: redis
              image: redis:alpine
              ports:
                - containerPort: 6379

I am really struggling to find the cause, especially as it only works sporadically. My guess is that it has something to do with the order in which Kubernetes starts the containers.
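To rule the startup order in or out, one common pattern is an initContainer on the application Deployment that blocks until the redis Service is reachable. A sketch, assuming the app runs in the same namespace as the Service above (the container name and busybox image are my choices, not part of the existing manifests):

```yaml
# Add under the application Deployment's pod spec (template.spec):
initContainers:
  - name: wait-for-redis
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z redis 6379; do echo waiting for redis; sleep 2; done']
```

If the second topic still fails to poll even with this in place, the startup order is likely not the cause.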

Thanks in advance!
