Thursday, 17 April 2025

10 Years' Experience: Java 8 / Spring Boot Interview Questions

 

Interview Questions.


Self Introduction



Hi Name,

Good Morning!


My name is Prabhu Prasad, and I am a Tech Lead with over 10 years of experience in software development. I specialize in designing and developing robust business applications using Java, Spring Boot microservices, and REST APIs. My expertise extends to requirement analysis, testing, deployment, and leveraging AI tools like GitHub Copilot and LLMs.

 

DSA 



https://dev.to/iuliagroza/complete-introduction-to-the-30-most-essential-data-structures-algorithms-43kd


I. Data Structures

  1. Arrays
  2. Linked Lists
  3. Stacks
  4. Queues
  5. Maps & Hash Tables
  6. Graphs
  7. Trees
  8. Binary Trees & Binary Search Trees
  9. Self-balancing Trees (AVL Trees, Red-Black Trees, Splay Trees)
  10. Heaps
  11. Tries
  12. Segment Trees
  13. Fenwick Trees
  14. Disjoint Set Union
  15. Minimum Spanning Trees

II. Algorithms

  1. Divide and Conquer
  2. Sorting Algorithms (Bubble Sort, Counting Sort, Quick Sort, Merge Sort, Radix Sort)
  3. Searching Algorithms (Linear Search, Binary Search)
  4. Sieve of Eratosthenes
  5. Knuth-Morris-Pratt Algorithm
  6. Greedy I (Maximum number of non-overlapping intervals on an axis)
  7. Greedy II (Fractional Knapsack Problem)
  8. Dynamic Programming I (0–1 Knapsack Problem)
  9. Dynamic Programming II (Longest Common Subsequence)
  10. Dynamic Programming III (Longest Increasing Subsequence)
  11. Convex Hull
  12. Graph Traversals (Breadth-First Search, Depth-First Search)
  13. Floyd-Warshall / Roy-Floyd Algorithm
  14. Dijkstra's Algorithm & Bellman-Ford Algorithm
  15. Topological Sorting
Programming questions

Hashtable internals

equals and hashCode

Streams

Singleton Class
public class Singleton {

    // volatile ensures the instance is safely published across threads
    private static volatile Singleton instance;

    private Singleton() {
        // Initialization code, if needed
    }

    // Static method to get the instance (double-checked locking)
    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}


public class Main {
    public static void main(String[] args) {
        // Get the Singleton instance
        Singleton singleton = Singleton.getInstance();

        // Use the Singleton
        // ...
    }
}

Immutable class


To create an immutable class in Java, you need to follow these general principles:

  1. Declare the class as final so it can’t be extended.
  2. Make all of the fields private so that direct access is not allowed.
  3. Don’t provide setter methods for variables.
  4. Make all fields final so that a field’s value can be assigned only once.
  5. Initialize all fields using a constructor method performing deep copy.
  6. Perform cloning of objects in the getter methods to return a copy rather than returning the actual object reference.


import java.util.HashMap;
import java.util.Iterator;

public final class FinalClassExample {

    // fields of the FinalClassExample class
    private final int id;
    private final String name;
    private final HashMap<String, String> testMap;

    public int getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    // Getter for the mutable field returns a copy, not the actual reference
    public HashMap<String, String> getTestMap() {
        return (HashMap<String, String>) testMap.clone();
    }

    // Constructor performing a deep copy
    public FinalClassExample(int i, String n, HashMap<String, String> hm) {
        System.out.println("Performing Deep Copy for Object initialization");

        // "this" keyword refers to the current object
        this.id = i;
        this.name = n;

        HashMap<String, String> tempMap = new HashMap<String, String>();
        String key;
        Iterator<String> it = hm.keySet().iterator();
        while (it.hasNext()) {
            key = it.next();
            tempMap.put(key, hm.get(key));
        }
        this.testMap = tempMap;
    }

    // Test the immutable class
    public static void main(String[] args) {
        HashMap<String, String> h1 = new HashMap<String, String>();
        h1.put("1", "first");
        h1.put("2", "second");
        String s = "original";
        int i = 10;

        FinalClassExample ce = new FinalClassExample(i, s, h1);

        // print the ce values
        System.out.println("ce id: " + ce.getId());
        System.out.println("ce name: " + ce.getName());
        System.out.println("ce testMap: " + ce.getTestMap());

        // change the local variable values
        i = 20;
        s = "modified";
        h1.put("3", "third");

        // print the values again
        System.out.println("ce id after local variable change: " + ce.getId());
        System.out.println("ce name after local variable change: " + ce.getName());
        System.out.println("ce testMap after local variable change: " + ce.getTestMap());

        // mutating the copy returned by the getter does not affect the object
        HashMap<String, String> hmTest = ce.getTestMap();
        hmTest.put("4", "new");
        System.out.println("ce testMap after changing variable from getter methods: " + ce.getTestMap());
    }
}

Find the count of each letter occurrence, in Java 8 and in older versions.

Algorithm basics: just refresh.

 

Java 8 features

Streams

https://javatechonline.com/java-8-features/

employees.forEach(e -> System.out.println(e.getName() + " " + e.getSalary()));

employees.stream().forEach(System.out::println);

List<Employee> employeesDev = employees.stream().filter(e -> e.getDept().equals("Development") && e.getSalary() > 10000).collect(Collectors.toList());

// Unique elements
Set<Employee> employeesDevSet = employees.stream().filter(e -> e.getDept().equals("Development") && e.getSalary() > 10000).collect(Collectors.toSet());

Map<Integer, String> mapEmployee = employees.stream().filter(e -> e.getDept().equals("Development") && e.getSalary() > 10000).collect(Collectors.toMap(Employee::getId, Employee::getName));

System.out.println(mapEmployee);


employees.stream().map(e -> e.getDept()).distinct().collect(Collectors.toList());

// map alone leaves a nested structure here
List<Stream<String>> nested = employees.stream().map(e -> e.getProjects().stream().map(p -> p.getProjectId())).collect(Collectors.toList());

// flatMap -> to reach nested objects inside a complex object, use flatMap
List<String> flatMapList = employees.stream().flatMap(e -> e.getProjects().stream().map(p -> p.getProjectName())).collect(Collectors.toList());

System.out.println(flatMapList);


// Sorted

List<Employee> sortedEmployees = employees.stream().sorted(Comparator.comparing(Employee::getSalary)).collect(Collectors.toList());

List<Employee> sortedEmployeesDesc = employees.stream().sorted(Collections.reverseOrder(Comparator.comparing(Employee::getSalary))).collect(Collectors.toList());

// min && max

Employee min = sortedEmployees.get(0);

Employee max = sortedEmployeesDesc.get(0);

Optional<Employee> minSalEmployee = employees.stream().min(Comparator.comparingDouble(Employee::getSalary));

Optional<Employee> maxSalEmployee = employees.stream().max(Comparator.comparingDouble(Employee::getSalary));

// groupingBy

Map<String, List<Employee>> employeeGroup = employees.stream().collect(Collectors.groupingBy(Employee::getGender));

Map<String, List<String>> employeeNamesByGender = employees.stream().collect(Collectors.groupingBy(Employee::getGender, Collectors.mapping(Employee::getName, Collectors.toList())));

Map<String, Long> count = employees.stream().collect(Collectors.groupingBy(Employee::getGender, Collectors.counting()));

System.out.println(count);

// findFirst

Optional<Employee> findFirstElement = employees.stream().filter(e -> e.getDept().equals("Development"))
        .findFirst();

System.out.println(findFirstElement.get()); // unsafe if the Optional is empty

if (findFirstElement.isPresent()) {
    System.out.println(findFirstElement.get());
}

findFirstElement.ifPresent(e -> System.out.println(e.getName()));

Optional

Employee firstDev = employees.stream().filter(e -> e.getDept().equals("Development"))
        .findFirst().orElse(new Employee());

Employee firstDevOrThrow = employees.stream().filter(e -> e.getDept().equals("Development"))
        .findFirst().orElseThrow(() -> new IllegalArgumentException("Employee not found"));

// findAny -> find any one element from the stream

employees.stream().filter(e -> e.getDept().equals("Development"))
        .findAny(); // useful with parallel streams

// anyMatch(Predicate)

// allMatch(Predicate)

// noneMatch(Predicate)

boolean anyMatch = employees.stream().anyMatch(e -> e.getDept().equals("Development")); // true or false

// allMatch checks whether all elements match the condition and returns a boolean.

// limit

List<Employee> top3Employees = employees
        .stream()
        .sorted(Comparator.comparing(Employee::getSalary).reversed())
        .limit(3)
        .collect(Collectors.toList());

// skip(long) -> skips the first n elements; useful for pagination
employees.stream().skip(5).collect(Collectors.toList());

// flatMap -> does both transformation and flattening:

// it flattens (merges) multiple collections into a single one.

ex: instead of List<List<String>>, you get a List<String>

map:

[a,b,c,d] [d,e,f] = [[a,b,c,d],[d,e,f]]

flatMap:

[a,b,c,d] [d,e,f] = [a,b,c,d,d,e,f]

employees.stream().flatMap(e -> e.getCities().stream()).collect(Collectors.toList());
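The map-vs-flatMap contrast above can be shown with a small runnable sketch (the nested-list input is illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapDemo {
    // map applies a function but keeps one output element per input element,
    // so the nesting survives: List<List<String>> in, List<List<String>> out
    static List<List<String>> mapped(List<List<String>> nested) {
        return nested.stream().map(inner -> inner).collect(Collectors.toList());
    }

    // flatMap merges the inner lists into one flat List<String>
    static List<String> flattened(List<List<String>> nested) {
        return nested.stream().flatMap(List::stream).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> nested = List.of(List.of("a", "b", "c", "d"), List.of("d", "e", "f"));
        System.out.println(mapped(nested));   // [[a, b, c, d], [d, e, f]]
        System.out.println(flattened(nested)); // [a, b, c, d, d, e, f]
    }
}
```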

Intermediate Operations:

map(), filter(), distinct(), sorted(), limit(), skip()


Terminal Operations:

forEach(), toArray(), reduce(), collect(), min(), max(), count(), anyMatch(), allMatch(), noneMatch(), findFirst(), findAny()

Lambda -> an anonymous method -> no name, no access modifier, no declared return type;

used to implement a functional interface.

 

@FunctionalInterface prevents accidentally adding multiple abstract methods.

An interface containing a single abstract method is a functional interface.

 In addition to the single abstract method, we can have any number of

 default & static methods, and even the public methods of the java.lang.Object class, inside a functional interface:

 equals

 hashCode

 toString

 wait

 notify

 notifyAll

 getClass()

 

 

Java 8 introduced default and static methods in interfaces.

default -> lets you add new functionality to an interface with a default implementation, so you don't have to change all implementing classes.

 

Abstract classes

Lambda expressions still cannot be used for abstract classes.

An abstract class can have instance variables, constructors, and static and instance blocks, but an interface cannot.

 

Method references and lambda expressions are ways to implement functional interfaces.

A method reference refers to a pre-existing method using the :: operator, ex:

System.out::println
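A small runnable sketch showing the same functional interface implemented via anonymous class, lambda, and method reference (the Greeter interface and names are illustrative):

```java
public class MethodRefDemo {
    // A single-abstract-method (functional) interface
    interface Greeter { String greet(String name); }

    static String run(Greeter g) { return g.greet("Prabhu"); }

    static String hello(String name) { return "Hello " + name; }

    public static void main(String[] args) {
        // 1. anonymous class (pre-Java 8 style)
        Greeter anon = new Greeter() {
            @Override public String greet(String name) { return "Hello " + name; }
        };
        // 2. lambda expression
        Greeter lambda = name -> "Hello " + name;
        // 3. method reference to an existing method with a matching signature
        Greeter ref = MethodRefDemo::hello;

        System.out.println(run(anon));   // Hello Prabhu
        System.out.println(run(lambda)); // Hello Prabhu
        System.out.println(run(ref));    // Hello Prabhu
    }
}
```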

 

Optional

java.util package

Optional<Object> emptyOptional = Optional.empty();

Optional<String> emailOptional = Optional.of(customer.getEmail()); // throws NullPointerException if the email is null

Optional.ofNullable(customer.getEmail());

// equivalent to: customer.getEmail() == null ? Optional.empty() : Optional.of(customer.getEmail())

To get the Optional's value, use the get() method:

if (emailOptional.isPresent()) {

    emailOptional.get();

}

emailOptional.orElse("default value");

emailOptional.orElseThrow(() -> new NoSuchElementException("email not found")); // accepts a Supplier functional interface

emailOptional.orElseGet(() -> "default value"); // also accepts a Supplier

filter() -> accepts a Predicate functional interface

forEach() -> accepts a Consumer functional interface

map() -> accepts a Function functional interface

 

 

 

Memory allocation

The Permanent Generation (PermGen) -> JVM metadata: class structures and class metadata. It was part of the heap and had a fixed size.

Metaspace (Java 8+) is outside the heap and can grow as needed, which avoids the old OutOfMemoryError: PermGen space issues.

Programs

Count the number of occurrences of each character in a given string with Java 8 streams:

String input = "Hello";

Map<Character, Long> frequencyOfChars = input.chars().mapToObj(c -> (char) c).collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

========

List<String> words = Arrays.asList("abc", "def");

Map<Character, Long> charFrequency = words.stream() // Stream<String>
        .flatMap(a -> a.chars().mapToObj(c -> (char) c)) // Stream<Character>
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

charFrequency.forEach((k, v) -> System.out.println(k + ":" + v));


=====================

String str = "Communication";
Map<String, Long> result = Arrays.stream(str.split("")).map(String::toLowerCase).collect(Collectors.groupingBy(s -> s, LinkedHashMap::new, Collectors.counting()));
System.out.println(result);

==================

Spring Boot Autoconfiguration

Java Design Patterns

Creational Patterns:

  • Factory, Abstract Factory, Builder, Prototype, Singleton

Structural Patterns:

  • Adapter, Bridge, Composite, Decorator, Facade, Flyweight, Proxy


Behavioral Patterns:

  • Chain of Responsibility, Command, Interpreter, Iterator, Mediator, Memento, Observer, State, Strategy, Template Method, Visitor



Design patterns

SOLID Design principles

S -> Single Responsibility Principle

Every Java class should have a single responsibility. Ex: a separate class for printing the passbook, a separate class for loans, a separate class for sending OTP notifications.
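A minimal sketch of the banking example (class names are illustrative):

```java
public class SrpDemo {
    // Each class owns exactly one responsibility.
    static class PassbookPrinter {
        String print(String account) { return "Passbook for " + account; }
    }
    static class LoanProcessor {
        String process(String account) { return "Loan processed for " + account; }
    }
    static class OtpNotifier {
        String sendOtp(String phone) { return "OTP sent to " + phone; }
    }

    public static void main(String[] args) {
        System.out.println(new PassbookPrinter().print("ACC-1"));
        System.out.println(new LoanProcessor().process("ACC-1"));
        System.out.println(new OtpNotifier().sendOtp("99999"));
    }
}
```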

O -> Open & Close Principle

Open for extension & Closed for modification

Create a Notification interface with OTP and report methods (sendOtp, sendReports),

then implement it in EmailNotification, WhatsappNotification, and MobileMessageNotification classes.
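A sketch of that structure (names follow the example above; bodies are illustrative):

```java
public class OcpDemo {
    // Closed for modification: callers depend only on this interface.
    interface Notification {
        String sendOtp(String to);
        String sendReports(String to);
    }

    // Open for extension: a new channel is a new class, not an edit to existing ones.
    static class EmailNotification implements Notification {
        public String sendOtp(String to) { return "email otp -> " + to; }
        public String sendReports(String to) { return "email report -> " + to; }
    }
    static class WhatsappNotification implements Notification {
        public String sendOtp(String to) { return "whatsapp otp -> " + to; }
        public String sendReports(String to) { return "whatsapp report -> " + to; }
    }

    public static void main(String[] args) {
        Notification n = new WhatsappNotification(); // swap channels freely
        System.out.println(n.sendOtp("99999"));
    }
}
```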

L -> Liskov Substitution Principle

Derived classes must be completely substitutable for their base classes. In other words, if class A is a subtype of class B, we should be able to replace B with A without interrupting the behavior of the program. To achieve this, create multiple interfaces with related abstract methods, and let classes implement only the interfaces relevant to them, so a subclass can substitute for its base class or interface.

ex: chat, share photos, video call, publish post

Interface1: chat and share methods

Interface2: video call and publish post

Insta class implements both interfaces

Whatsapp class implements Interface1

Facebook class implements both interfaces
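A small sketch of that split (interface and class names are illustrative; any Messaging implementation substitutes cleanly where Messaging is expected):

```java
public class LspDemo {
    interface Messaging { String chat(String msg); String sharePhoto(String file); }
    interface Broadcasting { String videoCall(String user); String publishPost(String post); }

    static class Whatsapp implements Messaging {
        public String chat(String msg) { return "wa chat: " + msg; }
        public String sharePhoto(String file) { return "wa photo: " + file; }
    }
    static class Instagram implements Messaging, Broadcasting {
        public String chat(String msg) { return "ig chat: " + msg; }
        public String sharePhoto(String file) { return "ig photo: " + file; }
        public String videoCall(String user) { return "ig call: " + user; }
        public String publishPost(String post) { return "ig post: " + post; }
    }

    // Any Messaging implementation can be substituted here without surprises.
    static String send(Messaging m, String msg) { return m.chat(msg); }

    public static void main(String[] args) {
        System.out.println(send(new Whatsapp(), "hi"));
        System.out.println(send(new Instagram(), "hi"));
    }
}
```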


I -> Interface Segregation Principle

Split larger interfaces into smaller ones. We should not force clients to implement methods they don't want to use. -> similar to the Single Responsibility Principle.

ex: UpiPaymentInterface, CashBackInterface

GPay implements both -> sendMoney, scratchCard, cashback

PhonePe implements only UpiPaymentInterface -> no cashback; by splitting the interfaces, PhonePe is not forced to implement CashBackInterface.
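A minimal sketch of the payments example (names follow the text above; method bodies are illustrative):

```java
public class IspDemo {
    // Split interfaces so clients implement only what they support.
    interface UpiPayment { String sendMoney(String to, double amount); }
    interface CashBack { String scratchCard(); }

    static class GPay implements UpiPayment, CashBack {
        public String sendMoney(String to, double amount) { return "gpay paid " + amount + " to " + to; }
        public String scratchCard() { return "gpay cashback!"; }
    }

    // PhonePe is not forced to implement CashBack.
    static class PhonePe implements UpiPayment {
        public String sendMoney(String to, double amount) { return "phonepe paid " + amount + " to " + to; }
    }

    public static void main(String[] args) {
        System.out.println(new PhonePe().sendMoney("A", 10.0));
        System.out.println(new GPay().scratchCard());
    }
}
```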

D -> Dependency Inversion Principle

Don't tightly couple to concrete classes; instead, introduce an interface to keep the code loosely coupled.

ex: if ShoppingMall holds a concrete DebitCard field (injected through its constructor) and doPurchase() calls debitCard.doTransaction() directly, then main() can only create a DebitCard (or CreditCard) instance and call that concrete transaction method, and switching cards means changing ShoppingMall.

But when we write an interface, implement it in DebitCard and CreditCard (each implementing doTransaction()), and have ShoppingMall depend on the interface, we can invoke whichever card we want.
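The idea can be sketched as a small runnable example (the BankCard interface name is illustrative; the rest follows the ShoppingMall example above):

```java
public class DipDemo {
    // High-level ShoppingMall depends on this abstraction, not on concrete cards.
    interface BankCard { String doTransaction(double amount); }

    static class DebitCard implements BankCard {
        public String doTransaction(double amount) { return "debit " + amount; }
    }
    static class CreditCard implements BankCard {
        public String doTransaction(double amount) { return "credit " + amount; }
    }

    static class ShoppingMall {
        private final BankCard card; // injected abstraction
        ShoppingMall(BankCard card) { this.card = card; }
        String doPurchase(double amount) { return card.doTransaction(amount); }
    }

    public static void main(String[] args) {
        // Swap the concrete card without touching ShoppingMall.
        System.out.println(new ShoppingMall(new DebitCard()).doPurchase(100));
        System.out.println(new ShoppingMall(new CreditCard()).doPurchase(100));
    }
}
```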


Microservices design patterns 

Decomposition

Strangler

API Gateway pattern






OAuth2 -> JWT token

https://www.autodraw.com/share/0NGELIR3UDHB

SpringBootSecurity

Till Spring Boot 2.7:

public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    public void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser()
            .password()
            .roles();

        auth.inMemoryAuthentication()
            .withUser()
            .password()
            .roles();
    }

    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authorizeRequests().antMatchers("/products/welcome").permitAll()
            .and()
            .authorizeRequests().antMatchers("/products/getAllProducts").authenticated()
            .and().httpBasic();
    }
}

Aggregator pattern

Database per service

Shared database per service

Command Query Responsibility Segregation (CQRS)

Keep read and write operations in different (segregated) microservices

to get high throughput; each side can be scaled flexibly and independently.

ex: a Flipkart or Amazon sale offer -> search (read operations) dominates, while write operations (buying a product) are fewer -> to keep both services in sync, use a messaging system (Kafka).

Saga Pattern -> Choreography Saga pattern -> introduces Kafka between microservices instead of chaining multiple HTTP requests.

MicroServices:

Log Aggregation

Externalized configuration

Service Registry & Service Discovery

Circuit Breaker

Loose coupling

High Cohesion

Sidecar pattern


Microservices fallout-scenario handling


For requests that fail due to system or data failures,

maintain a dead-letter queue and a fallout queue so failed requests can be reprocessed after a few retries without blocking other requests.

Understand scope of the problem

    Health checks: actuator /health endpoint

    @Component
    public class MyCustomHealthIndicator implements HealthIndicator {
        @Override
        public Health health() {
            // run a custom check here and report UP or DOWN accordingly
            return Health.up().build();
        }
    }

1. Understand the Scope of the Problem:

  • Detection: The first step is to detect that a microservice is down. This requires robust monitoring and alerting.
    • Health Checks: Implement health check endpoints in each microservice (e.g., /health/status). These endpoints should return a 200 OK status if the service is healthy and an error status if it's not.
    • Monitoring Tools: Use monitoring tools (e.g., Prometheus, Grafana, Datadog, New Relic) to periodically check the health check endpoints of each service.
    • Alerting: Configure alerts to notify you when a service's health check fails or when other metrics (e.g., CPU usage, memory usage, error rate) exceed a certain threshold.
  • Isolation: Ensure that the failure of one microservice doesn't cascade and bring down other services.

2. Implement Fault Tolerance Patterns:

  • Circuit Breaker:
    • Purpose: To prevent a client from repeatedly trying to call a failing service, which can waste resources and potentially overload the failing service.
    • How it Works: The circuit breaker monitors the number of failures. If the failure rate exceeds a certain threshold, the circuit breaker "opens," and subsequent calls to the service are immediately failed without even attempting to connect. After a certain amount of time, the circuit breaker enters a "half-open" state, where it allows a limited number of calls to the service to see if it has recovered. If the calls succeed, the circuit breaker "closes" and normal operation resumes.
    • Libraries: Hystrix (Netflix, now in maintenance mode), Resilience4j (more actively maintained).
  • Retry:
    • Purpose: To automatically retry failed requests to a service, in case the failure was transient (e.g., a temporary network glitch).
    • How it Works: When a request fails, the client automatically retries the request after a certain delay. The delay can be fixed or exponential (increasing with each retry).
    • Considerations: Be careful about retrying requests that are not idempotent (i.e., requests that have side effects). Retrying a non-idempotent request could lead to unintended consequences (e.g., creating duplicate orders).
  • Fallback:
    • Purpose: To provide an alternative response when a service is unavailable.
    • How it Works: When a request fails, the client executes a fallback function that provides a default value, retrieves data from a cache, or performs some other action.
    • Example: If a product catalog service is down, the client could display a cached version of the catalog or a default set of products.
  • Bulkhead:
    • Purpose: To isolate different parts of your application so that a failure in one part doesn't affect other parts.
    • How it Works: The bulkhead pattern limits the number of concurrent calls to a service. This prevents a single failing service from consuming all of the resources of the calling service.
  • Rate Limiting:
    • Purpose: To prevent a client from overwhelming a service with too many requests.
    • How it Works: The rate limiting pattern limits the number of requests that a client can make to a service within a certain time period. This can help to protect services from denial-of-service attacks and prevent them from becoming overloaded.
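A minimal hand-rolled sketch of the circuit-breaker state machine described above; a real project would typically use Resilience4j, and the half-open state is omitted here for brevity:

```java
import java.util.function.Supplier;

public class CircuitBreakerDemo {
    enum State { CLOSED, OPEN }

    static class CircuitBreaker {
        private final int failureThreshold;
        private int failures = 0;
        private State state = State.CLOSED;

        CircuitBreaker(int failureThreshold) { this.failureThreshold = failureThreshold; }

        // Runs the call while CLOSED; once consecutive failures reach the
        // threshold, trips OPEN and short-circuits to the fallback without
        // calling the service at all.
        String call(Supplier<String> service, Supplier<String> fallback) {
            if (state == State.OPEN) return fallback.get();
            try {
                String result = service.get();
                failures = 0; // a success resets the counter
                return result;
            } catch (RuntimeException e) {
                if (++failures >= failureThreshold) state = State.OPEN;
                return fallback.get();
            }
        }

        State state() { return state; }
    }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(3);
        Supplier<String> failing = () -> { throw new RuntimeException("payment service down"); };
        for (int i = 0; i < 5; i++) {
            System.out.println(cb.call(failing, () -> "fallback: order queued"));
        }
        System.out.println(cb.state()); // OPEN after 3 consecutive failures
    }
}
```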

3. Design for Idempotency:

  • Definition: An operation is idempotent if it can be executed multiple times without changing the result beyond the initial application.
  • Importance: Designing your services to be idempotent makes it much easier to handle failures and retries. If an operation is idempotent, you can safely retry it without worrying about unintended side effects.
  • Example: An operation to set the quantity of an item in a shopping cart to 5 is idempotent. An operation to increment the quantity of an item in a shopping cart is not idempotent.
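The shopping-cart example above, as a runnable sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class IdempotencyDemo {
    static final Map<String, Integer> cart = new HashMap<>();

    // Idempotent: repeating the call leaves the same final state.
    static void setQuantity(String item, int qty) { cart.put(item, qty); }

    // Not idempotent: each retry changes the result again.
    static void incrementQuantity(String item) { cart.merge(item, 1, Integer::sum); }

    public static void main(String[] args) {
        setQuantity("book", 5);
        setQuantity("book", 5);   // safe retry: quantity is still 5
        System.out.println(cart.get("book")); // 5

        incrementQuantity("pen");
        incrementQuantity("pen"); // a retry silently doubles the order
        System.out.println(cart.get("pen"));  // 2
    }
}
```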

4. Implement Asynchronous Communication:

  • Message Queues: Use message queues (e.g., RabbitMQ, Kafka, ActiveMQ) to decouple services and enable asynchronous communication.
    • Benefits: If a service is down, the messages will be queued and delivered when the service comes back online. This improves the resilience of the system and prevents data loss.
  • Eventual Consistency: Embrace eventual consistency, which means that data might not be immediately consistent across all services, but it will eventually become consistent.

5. Use Service Discovery and Load Balancing:

  • Service Discovery: Use a service discovery mechanism (e.g., Consul, etcd, ZooKeeper, Kubernetes DNS) to allow services to dynamically locate each other.
  • Load Balancing: Use a load balancer (e.g., Nginx, HAProxy, Kubernetes Service) to distribute traffic across multiple instances of a service.
    • Benefits: If one instance of a service goes down, the load balancer will automatically redirect traffic to the remaining instances.

6. Implement Proper Monitoring and Logging:

  • Centralized Logging: Use a centralized logging system (e.g., ELK stack, Splunk) to collect and analyze logs from all services.
  • Distributed Tracing: Use distributed tracing (e.g., Jaeger, Zipkin) to track requests as they flow through the system.
    • Benefits: This makes it easier to identify the root cause of failures and to monitor the performance of the system.

7. Use Container Orchestration (e.g., Kubernetes):

  • Self-Healing: Container orchestration platforms like Kubernetes provide self-healing capabilities, such as automatically restarting failed containers.
  • Scaling: They also make it easy to scale services up or down based on demand.
  • Service Discovery: Kubernetes provides built-in service discovery and load balancing.

8. Consider a Service Mesh (e.g., Istio, Linkerd):

  • Traffic Management: Service meshes provide advanced traffic management capabilities, such as traffic shaping, canary deployments, and A/B testing.
  • Security: They also provide security features, such as mutual TLS authentication and authorization policies.
  • Observability: Service meshes enhance observability by providing detailed metrics and traces.

Example Scenario (Order Service Depends on Payment Service):

  1. Payment Service Down: The Payment Service becomes unavailable.
  2. Order Service Detects Failure: The Order Service's health checks to the Payment Service start failing.
  3. Circuit Breaker Opens: The Circuit Breaker in the Order Service opens, preventing further calls to the Payment Service.
  4. Fallback Mechanism: The Order Service uses a fallback mechanism:
    • If the payment was already attempted and failed, the Order Service displays an error message to the user, suggesting they try again later.
    • If the payment hasn't been attempted yet, the Order Service queues the order for later processing (using a message queue) and notifies the user that their order is pending.
  5. Monitoring and Alerting: The monitoring system alerts the operations team about the Payment Service failure.
  6. Payment Service Recovery: The operations team fixes the Payment Service issue, and it comes back online.
  7. Circuit Breaker Closes: The Circuit Breaker in the Order Service closes, and normal communication with the Payment Service resumes.
  8. Queued Orders Processed: The Order Service processes the queued orders.

Key Takeaways:

  • Proactive Monitoring: Invest in robust monitoring and alerting to detect failures quickly.
  • Embrace Fault Tolerance: Implement fault tolerance patterns to prevent failures from cascading and to provide a good user experience.
  • Design for Resilience: Design your services to be resilient to failures. This includes using asynchronous communication, service discovery, and load balancing.
  • Automate Recovery: Automate as much of the recovery process as possible to reduce downtime.
  • Continuous Improvement: Continuously monitor and improve your failure handling strategies.




Challenges of microservices

Complexity

  • Increased Operational Overhead: Dealing with a distributed system is inherently more complex than managing a monolithic application. You have to handle inter-service communication, distributed transactions, and more.
  • Debugging Challenges: Tracing requests across multiple services can be difficult. You need robust logging and monitoring to understand the flow of data and pinpoint issues.
  • Deployment Complexity: Deploying and managing numerous services requires mature DevOps practices, including CI/CD pipelines, containerization (like Docker), and orchestration (like Kubernetes).

Development and Design

  • Distributed Data Management: Maintaining data consistency across multiple services can be tricky. You might need to implement patterns like eventual consistency, which can add complexity to your application logic.
  • Increased Development Effort: Breaking down a monolith into microservices requires careful planning and design. Each service needs to be independently deployable and maintainable.
  • Inter-Service Communication: Choosing the right communication protocol (e.g., REST, gRPC, message queues) and handling network latency, failures, and retries are critical considerations.

Performance and Security

  • Increased Latency: Communication between services adds overhead. Network latency can become a bottleneck, especially if services are chatty.
  • Security Concerns: Securing inter-service communication is crucial. You need to implement authentication, authorization, and encryption to protect your data.

Organizational Challenges

  • Team Autonomy and Coordination: Microservices require autonomous teams that can independently develop and deploy their services. However, coordination between teams is still essential to ensure overall system coherence.
  • Skill Requirements: Teams need expertise in distributed systems, DevOps, and various technologies. This can require significant training and upskilling.

Cost

  • Infrastructure Costs: Running multiple services often requires more infrastructure resources than a monolithic application.
  • Monitoring and Logging Costs: Comprehensive monitoring and logging are essential for managing microservices, which can add to the overall cost.

Find the nth-highest salary department-wise (SQL queries)

normalization basics

JWT token, Spring Security -> GCP Cloud Run

GCP Apigee gateway

Document DB basics

Install Document DB and a NoSQL database

 

Check AWS services:

ECS, S3, Terraform, Lambda

K8s kubectl commands

to create build and push and pull commands

deploy

get

scale

replica

logs

all kubectl commands

service,

deploy file

pods files and all.

configmap files.

 

Hibernate caches

Client -> the first query hits the DB.

            -> The second query in the same session fetches the data from the first-level cache instead of the DB -> Hibernate provides this by default.

The same user (or another user) in a different/second session running the same query -> session 2's first-level cache has no data, so it hits the DB again -> this is where the second-level cache is needed.

-> All sessions can use the second-level cache -> third-party cache providers such as Ehcache, OSCache, SwarmCache.

JPA save and persist methods

persist() works with the persistence context (@PersistenceContext) via the EntityManager:

it creates (saves) a new record; if a record with that id already exists, it throws an exception and does not update it.

It does not return the entity.

persist() is standard JPA.

 

 


 


Reactive programming ->

Event looping

Asynchronous and non-blocking.

A single thread can accept multiple requests -> and subscribe to the DB for the response.

Data flows as an event-driven stream.

Subscribe to the DB -> if any updates happen in the DB, we also get all the updated data. Ex: a live cricket scoreboard.

Back pressure on data streams -> the app can tell the DB to wait until it has processed the existing records.

Pub/Sub, Mono


Kafka: Distributed messaging system.

acks=0

acks=all or acks=-1

Properties props = new Properties();
props.put("bootstrap.servers", "kafka1:9092,kafka2:9092,kafka3:9092");
props.put("acks", "all");
props.put("retries", 3);
props.put("enable.idempotence", "true");
props.put("max.in.flight.requests.per.connection", "1");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("delivery.timeout.ms", "120000");


ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key", "value");
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        // delivery failed after retries: log and handle (e.g. route to a fallout queue)
        exception.printStackTrace();
    } else {
        System.out.println("Delivered to " + metadata.topic() + "-" + metadata.partition() + "@" + metadata.offset());
    }
});

Checklist for Ensuring Message Delivery:

  •  Set acks=all (or acks=-1) on the producer.
  •  Set retries to a reasonable value (e.g., 3-10) on the producer.
  •  Enable enable.idempotence=true on the producer (Kafka >= 0.11).
  •  Set max.in.flight.requests.per.connection=1 on the producer (in combination with idempotence and acks=all).
  •  Set delivery.timeout.ms to a sufficiently high value on the producer.
  •  Set replication.factor to 3 (or higher) on the topic.
  •  Set min.insync.replicas to 2 (or higher) on the topic.