Interview — Senior Java Backend Engineer
Difficulty: hard
Started: 2026-04-28T15:54:57.205Z
Ended: 2026-04-28T16:11:24.883Z
Score: 4.2/10 (no-hire)

Strengths:
- Correctly explained REQUIRED vs REQUIRES_NEW with a concrete MahaRERA audit-log example — showed practical application, not just textbook knowledge (turns 9, 13).
- Accurately distinguished orElse (eager evaluation) from orElseGet (lazy evaluation) and correctly identified the performance implication for expensive operations like DB calls (turn 7).
- Correctly identified the Java 7 infinite-loop/circular-linked-list risk in HashMap under concurrent rehashing, and named ConcurrentHashMap, synchronized map, and ReentrantLock as remedies (turn 15).
- Demonstrated awareness of ConcurrentHashMap's evolution from segment locking (Java 7) to CAS + bucket-level locking (Java 8), which is a non-trivial internal detail (turn 17).
- The @Transactional self-invocation limitation was identified correctly — calling a transactional method from within the same class bypasses the Spring proxy, so the annotation has no effect (turn 11).

Weaknesses:
- N+1 query problem: despite two explicit prompts (turns 19, 20, 22), the candidate never explained what causes N+1 at the Hibernate level — she jumped straight to a fix without demonstrating she understands the root cause. This is a fundamental Hibernate concept for any senior Java developer.
- flatMap explanation was conceptually incorrect — described as 'one to many output' rather than the correct definition of flattening a stream of streams into a single stream. The code example was incoherent and would not compile (turn 5).
- Communication quality is below Tech Lead standard — answers are consistently fragmented, grammatically broken, and difficult to parse. A Tech Lead must communicate design decisions clearly to junior developers, clients, and architects.
- No system design signal was produced in the portion of the interview completed. Given the role requires designing microservice architectures and preparing technical flow diagrams, the absence of any demonstrated architectural reasoning is a significant gap.
- Under drilling pressure the candidate deflected or gave incomplete answers (turns 19-22), suggesting surface-level familiarity with concepts rather than deep working knowledge — a risk for a role that requires peer code reviews and technical leadership.

Study plan:
- Hibernate N+1 problem: study how LAZY loading on @OneToMany collections triggers N+1 queries, how to identify it via Hibernate SQL logging and EXPLAIN ANALYZE, and how to fix it with JOIN FETCH, @EntityGraph, or batch fetching — this is a must-know for any senior Java developer.
- Java Streams API: rebuild understanding of map vs flatMap with working code examples — flatMap flattens a Stream<Stream<T>> to a Stream<T>; practice with List<List<T>> → Stream<T> scenarios until the mental model is solid.
- System design fundamentals: practice designing event-driven microservice architectures covering async messaging (Kafka/RabbitMQ), idempotency, the outbox pattern for guaranteed delivery, rate limiting for external APIs, and security layers (TLS, field-level encryption for PII) — all directly relevant to the MahaRERA document verification design question.
- Technical communication: practice explaining technical concepts in structured 3-part answers — what it is, why it matters, a real example from your project. Record yourself and review for clarity before the next interview round.
- PostgreSQL indexing and window functions: study RANK vs DENSE_RANK vs ROW_NUMBER with concrete examples, and practice index selection strategy (B-tree vs partial index, covering index) for high-volume tables — this was not reached but is on the JD.
- Tech Lead soft skills: prepare STAR-format answers for production incident ownership, code review conflict resolution, and junior mentorship — the behavioral segment was not reached and these are weighted heavily for a lead role.
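As a starting point for the Streams study item above, a minimal runnable sketch of the map vs flatMap distinction (class and method names are illustrative, not from the interview):

```java
import java.util.List;
import java.util.stream.Collectors;

// map on a List<List<Integer>> would yield a Stream<List<Integer>>;
// flatMap turns each inner list into a stream and flattens them into
// a single Stream<Integer>.
class FlattenDemo {
    static List<Integer> flatten(List<List<Integer>> nested) {
        return nested.stream()
                .flatMap(List::stream)              // Stream<List<Integer>> -> Stream<Integer>
                .collect(Collectors.toList());
    }
}
```

Practicing until `flatten(List.of(List.of(1, 2), List.of(3)))` obviously yields `[1, 2, 3]` is the mental model the study item asks for.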
--- TRANSCRIPT ---

[0] INTERVIEWER: Give me a 60-second tour of your background and what you're working on right now.

[1] CANDIDATE: okay hi I am Anjali Srivast I have around 8 years of experience as a Java developer and technology wise I work with the Spring Boot Spring MVC microservices and database sites such as Postgres S1, MySQL and I have also knowledge about CICT pipeline Docker and even architecture and I I started my career at 50 Hertz where I worked on a renewable energy forecasting platform that's where I got my functional expertise in Spring framework and enterprises for Java development and after I joined Infosys where I worked with Ford Motor Company. There is multiple parts like Svideo and DTO so I worked both the functionality. So basically we are working on the, like sometimes vehicle part is missing. So we are working on the vehicle delivery tracking and wind based lookup and online complaint management system. And from there, I joined PWC with probably my most significant project so far. So I have worked on Mahaneda, Maharashtra real estate project. So basically in that project, dealer and users need, dealer and need. So I worked on Mahaneda Maharashtra government real estate regulatory port.

Good: Covered all four employers in roughly chronological order and named relevant technologies (Spring Boot, microservices, PostgreSQL, Docker, CI/CD). The MahaRERA mention as 'most significant project' showed some self-awareness.
Missed: The answer ran well over 60 seconds, was disorganized, and lacked a clear narrative arc. No quantified impact or specific ownership was mentioned — it read as a resume recitation rather than a confident professional summary.
Ideal: A strong answer would be structured as: current role + specific ownership → most impactful prior project → technology thread connecting them. It would include one concrete metric (e.g., '120 REST APIs', '10,000 daily uploads') and end with a clear statement of what the candidate is looking for next.

[2] INTERVIEWER: What specifically did you personally own end to end at your current company — not the team, you?

[3] CANDIDATE: so currently i work with capgev and i and working on hp print subscription platform okay i worked on the like in hp subscription project there is one module for pass printer as a service so basically in that scenario we can't buy the printer but we are taking in lease for as per the requirement like if you want to if you need a 30 pages for 50 pages so according to that pages you need to pay some amount so especially i worked on notification service and subscription service like we worked on microservice based architecture and there is multiple services so in notification service firstly data like any customer will become so using of the API gateway first, like first request will come for the notification service and notification service synchronously call for subscription service. So in that part I worked and in subscription service we are synchronously call for user service and notification service. Once we are checking internal

Good: Correctly identified the HP Print Subscription platform at Capgemini and named two specific services owned: Notification Service and Subscription Service. Mentioned API gateway as the entry point.
Missed: The description of inter-service communication was confused — described notification service calling subscription service 'synchronously' but then described subscription service also calling notification service, creating a circular dependency that was never resolved. No specific technical decisions or personal ownership artifacts were named.
Ideal: A strong answer would clearly state: 'I owned the Notification Service end to end — I designed the async event consumption from Kafka, implemented retry logic with exponential backoff, wrote the JUnit tests, and handled the on-call escalations.' Specific design decisions and measurable outcomes (e.g., latency, reliability SLA) would demonstrate real ownership.

[4] INTERVIEWER: Walk me through how Stream's flatMap differs from map — give me a code example from memory.

[5] CANDIDATE: okay so firstly I will explain for map then I will explain for flat map so map is like if you are passing some input so it will return the same output and if you are using for flat map so if you are giving some input so it is giving in the like nested output like one too many output so the difference is like map is written for one to one output and flat map is written if you are passing one so it will return for the many so in the code wise like if we are using for a map so one string become one integer so we like this list dot stream functions and map and we will pass for string length like six and five example and flat map like if you are taking list.stream.flatmap as stream api we will use for the stream function and array.stream and we will pass for s split like hello world or java like that.

Good: Correctly identified that map is one-to-one and flatMap handles nested/multiple outputs. Attempted a code example from memory, which shows willingness to engage.
Missed: The core definition of flatMap was wrong — it is not about 'one to many output' but about flattening a Stream<Stream<T>> into a Stream<T>. The code example was incoherent and would not compile as described. No mention of the key use case: when each element maps to a collection and you want a single flat stream.
Ideal: A strong answer: 'map transforms each element one-to-one; flatMap transforms each element into a stream and then flattens all those streams into one. Example: List<List<Integer>> nested = ...; nested.stream().flatMap(Collection::stream).collect(toList()) — this flattens the nested lists. With map you'd get a Stream<List<Integer>>, which is rarely what you want.'

[6] INTERVIEWER: What's the difference between orElse and orElseGet in Optional — when does it actually matter performance-wise?

[7] CANDIDATE: So, Java 8 is introduced for optional. So, optional is... Optional or else get. Both returns are default values when optional is empty. The difference is when they evaluate the default or else always evaluate the argument even if the optional has a value or else get function is lazy. only evaluates if the optionals is actually empty db call is always happen like optional dot of users or else fetch from database so db call is always happen and if we will use for get so optional dot of users or else get fetch from database so db call only if empty

Good: Correctly stated that orElse always evaluates its argument even when the Optional has a value, while orElseGet is lazy and only evaluates when the Optional is empty. Gave a concrete DB-call example to illustrate the performance implication.
Missed: Did not mention the method signature difference (orElse takes a T, orElseGet takes a Supplier<? extends T>), which is the mechanism that enables lazy evaluation. Could have been more precise about the cost: orElse with a DB call will hit the database on every invocation regardless of cache hit.
Ideal: Strong answer: 'orElse(T other) evaluates other eagerly — the expression is computed before Optional checks if it's empty. orElseGet(Supplier<? extends T> supplier) takes a lambda that is only invoked if the Optional is empty. Performance matters when the default value is expensive to compute — a DB call, a network request, or object construction. Always prefer orElseGet for non-trivial defaults.'

[8] INTERVIEWER: Explain how @Transactional propagation works — specifically REQUIRED vs REQUIRES_NEW.

[9] CANDIDATE: So, add-red transactional is basically using of database side.
Like, whenever we are performing some database operation, like rollback or something, so we are using for add-red transactional. So, second thing you ask for required and required new. So, required is the default if the transaction already exists. So, join it. If not, create a new one. So, if a service A calls service B and both are required, they share the same transaction. If B fails, everything rolls back including A's work. And required new, always create a brand new transaction regardless of whether one exists. The existing transaction is suspended until the new one completes. If B fails, only B's transaction rolls back. A's work is unaffected. Like in real example of Mahara, when an agent application was approved, the main transaction updated the application status. Audit login and use required new. So the audit records was always saved even if the main transactions rolled back due to some downstream failure.

Good: Correctly explained REQUIRED (join existing or create new) and REQUIRES_NEW (always create new, suspend existing). Correctly described rollback isolation — B fails, only B rolls back. Gave a plausible MahaRERA audit-log scenario.
Missed: Did not mention that REQUIRES_NEW requires a separate database connection from the pool, which has resource implications. Did not mention NESTED propagation as an alternative. The MahaRERA example was slightly imprecise — audit logging in REQUIRES_NEW means the audit record persists even if the outer transaction rolls back, which is the correct intent but was not stated explicitly.
Ideal: A complete answer would add: REQUIRES_NEW suspends the outer transaction and opens a new connection — this has connection pool cost implications at scale. Also worth mentioning: the inner transaction commits independently, so if the outer rolls back, the inner's changes are already committed and visible. This is exactly why audit logs use it — you want the audit record even on failure.
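The commit-independence point in the ideal answer can be illustrated with a small self-contained model. This is a toy simulation of the propagation semantics — not Spring code — where each "transaction" is a write buffer that either reaches the committed store or is discarded; all names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of REQUIRED vs REQUIRES_NEW: the outer (REQUIRED) work sits in
// one buffer; the audit write (REQUIRES_NEW) uses its own buffer that commits
// immediately, so it survives an outer rollback.
class TxStore {
    final List<String> committed = new ArrayList<>();

    void approveApplication(boolean outerFails) {
        List<String> outerTx = new ArrayList<>();       // outer transaction's pending writes
        outerTx.add("status=APPROVED");

        List<String> auditTx = new ArrayList<>();       // independent inner transaction
        auditTx.add("audit: submitted -> approved");
        committed.addAll(auditTx);                      // inner commits on its own, right away

        if (outerFails) return;                         // outer rollback: buffer discarded
        committed.addAll(outerTx);                      // outer commit
    }
}
```

Running `approveApplication(true)` (outer failure) leaves only the audit record in `committed` — the behavior the candidate's MahaRERA example was reaching for.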
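Looping back to turn 7, the eager-vs-lazy evaluation difference between orElse and orElseGet can also be demonstrated with a short runnable sketch (class and method names are illustrative; the counter stands in for the DB call from the candidate's example):

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;

// Counts how often the "expensive" default is computed, showing that
// orElse evaluates its argument even when a value is present, while
// orElseGet's supplier runs only when the Optional is empty.
class DefaultCost {
    static final AtomicInteger calls = new AtomicInteger();

    static String expensiveDefault() {                  // stand-in for a DB call
        calls.incrementAndGet();
        return "default";
    }

    static int demo() {
        Optional<String> present = Optional.of("cached");
        present.orElse(expensiveDefault());             // argument evaluated anyway
        present.orElseGet(DefaultCost::expensiveDefault); // supplier never invoked
        Optional<String> empty = Optional.empty();
        empty.orElseGet(DefaultCost::expensiveDefault); // supplier invoked once
        return calls.get();
    }
}
```

`demo()` returns 2: one wasted evaluation from orElse on a present value, one legitimate evaluation from orElseGet on an empty Optional.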
[10] INTERVIEWER: What happens if you call a @Transactional method from within the same class?

[11] CANDIDATE: so if we are using add direct transitional within the same class so like just example we have a class like a demo class and if we are like when you call for spring bean from like outside or the call goes through the like proxy which wrap with the transaction but to like when you call the method within the same class you will calling by this function like that is answer like we it is not possible like the transactions won't apply it silently does nothing

Good: Correctly identified the core issue: calling a @Transactional method within the same class uses 'this' reference, bypassing the Spring proxy, so the transaction annotation is silently ignored.
Missed: Did not name the solution: extract the method into a separate Spring bean, or use self-injection (ApplicationContext.getBean or @Autowired self-reference), or use AspectJ weaving instead of proxy-based AOP. Did not mention that this is a JDK dynamic proxy or CGLIB proxy limitation.
Ideal: Strong answer: 'Spring implements @Transactional via a proxy — when you call a method from outside the class, the call goes through the proxy which wraps it in a transaction. Internal calls use the raw this reference, bypassing the proxy entirely. Fix: extract the method to a separate @Service bean so the call goes through the proxy, or enable AspectJ compile-time weaving which instruments the bytecode directly and doesn't rely on proxies.'

[12] INTERVIEWER: Where specifically did you use REQUIRES_NEW in MahaRERA or Capgemini, and why — give me the exact scenario at the code level.

[13] CANDIDATE: Sure. In Mahadera, Capgemini we used for request review.
Like, so in Mahadera, like when we are working on agent application, like we are, it is in change state so say from submitted to approved so two things needed like to happen update the application and status and write an audit log entry so in that scenario we use required new because if the main transactions failed for any reason so a validations error and downstream service failure so the audit record still needed to be saved so you can't lose the audit really of the government compliance platform

Good: Gave a concrete scenario: agent application state transition (submitted → approved), two operations (status update + audit log), and correctly justified REQUIRES_NEW because audit records must survive outer transaction rollback in a government compliance context.
Missed: Did not describe the actual code structure — which class/method had @Transactional(propagation = REQUIRES_NEW), what the service call chain looked like, or what the entity/repository involved was. 'Code level' was asked and not delivered.
Ideal: A strong code-level answer: 'In AgentApplicationService.approveApplication() annotated @Transactional(propagation=REQUIRED), I called auditService.logStatusChange() which was in a separate @Service bean annotated @Transactional(propagation=REQUIRES_NEW). This ensured the audit INSERT committed immediately in its own transaction. If the outer transaction rolled back due to a downstream payment gateway failure, the audit record was already committed and visible in the audit table.'

[14] INTERVIEWER: Two threads simultaneously call put() on a HashMap — what can go wrong and what are your options?
[15] CANDIDATE: or you are also wrong yeah so if two threads simultaneously calls put function so in that due in that time collision will be happen like because hash map is working on key value pair internally it is working on the link list and in java it is introduced for red black free and free file architecture so in that time like java 7 it is showing in finite loop like during resize like two threads simultaneously uh rehashing can create a circular circular link list like any subsequent get function on that like key loops forever hanging in the thread permanent so we will use like there there is three options to fix. Like there is three or more options like collection dot, synchronized map, concurrent hash map and retrain.

Good: Correctly identified the infinite loop / circular linked list risk during concurrent rehashing in Java 7. Named three valid remedies: Collections.synchronizedMap, ConcurrentHashMap, and ReentrantLock. Showed awareness of the internal linked list / red-black tree structure.
Missed: Did not explain data corruption risk beyond infinite loop — e.g., lost updates where one thread's put silently overwrites another's. Did not explain why synchronizedMap is a poor choice for high-concurrency (single lock on entire map). Did not distinguish when to use each option.
Ideal: Strong answer: 'Two threads calling put() simultaneously can cause lost updates — one thread's entry silently disappears. In Java 7, concurrent rehashing could create a circular linked list causing an infinite loop on get(). Options: synchronizedMap wraps every operation in a single lock — correct but a bottleneck. ConcurrentHashMap uses CAS for empty buckets and synchronized on the bucket head for collisions — much better throughput. ReentrantReadWriteLock is appropriate when reads vastly outnumber writes and you need custom locking granularity.'
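The ConcurrentHashMap option from the ideal answer can be exercised with a small runnable sketch — merge performs the per-bucket read-modify-write atomically, so concurrent increments are not lost (names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Several threads increment the same key concurrently. With a plain HashMap
// this would lose updates; ConcurrentHashMap.merge applies Integer::sum
// atomically, so every increment is counted.
class SafeCounter {
    static Map<String, Integer> count(int threads, int perThread) {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    counts.merge("hits", 1, Integer::sum); // atomic increment
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try { w.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return counts;
    }
}
```

With 4 threads and 1000 increments each, the final count is exactly 4000 on every run — the determinism a plain HashMap cannot guarantee here.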
[16] INTERVIEWER: What specific mechanism does ConcurrentHashMap use internally for writes — is it synchronized blocks, CAS, or locks?

[17] CANDIDATE: so when we are using for concurrent hash maps internally like in Java 7 they are working on the segment logging and the segment locking is nothing like we are extend for the return block and like if we are working multiple threads so if you are working on multiple segments so they have own lock so in Java 7 we are working on that but in Java 8 is in produce for CAs like compare and swap atomic atomically checks so basically like uh here we are working on not working on the segment but it is working on bucket system so like simultaneously uh like if we are writing yeah it is working it is uh uh like a design for the multi-threaded environment it is a threat threat safe so we are working that's why we are using concurrent hash map so like if the bucket is null and insert the new node in the single hardware level so in that time like no lock is needed so if you are working first thread or second thread so in that duration you can create a concurrent new concurrent object and then you can lock that

Good: Correctly identified the Java 7 segment locking model and the Java 8 shift to CAS for empty bucket insertion and synchronized on the bucket head for collision. Showed genuine knowledge of the internal evolution.
Missed: Explanation was disorganized and hard to follow. Did not clearly state that in Java 8, CAS is used only when the bucket is null (first insertion), and synchronized(bucket_head) is used when there is a collision — these are two distinct mechanisms. Did not mention the load factor or tree conversion threshold (8 nodes → red-black tree).
Ideal: Strong answer: 'Java 8 ConcurrentHashMap uses two mechanisms: for an empty bucket, it uses a CAS operation — compareAndSet(null, newNode) — which is lock-free and very fast. For a non-empty bucket (collision), it synchronizes on the head node of that bucket only, so contention is limited to that single bucket. This is far more granular than Java 7's 16-segment lock. At 8 entries per bucket it converts the linked list to a red-black tree for O(log n) lookup.'

[18] INTERVIEWER: Explain the N+1 query problem — how did you identify it and fix it on MahaRERA?

[19] CANDIDATE: so in n plus query problem like if we are writing for hibernate sql login in the sit environment so you have name was n plus query like post was sql number for the pixel show

[20] INTERVIEWER: Take your time — explain what the N+1 problem actually is first, then tell me how you fixed it.

[21] CANDIDATE: We also confirm the fix using Postgres SQL explain analysis. Before the fix it shows n-sequential scans on the documents table after it shows a single join operation.

Good: Mentioned using EXPLAIN ANALYZE in PostgreSQL to verify the fix, and described the before/after: N sequential scans on the documents table vs a single join operation. This is a valid verification technique.
Missed: The interviewer explicitly asked for the explanation of what N+1 is first — the candidate skipped this entirely for the second time. There is no evidence the candidate can explain the root cause at the Hibernate level.
Ideal: The answer needed to start with: 'N+1 occurs when Hibernate fetches a parent entity, then issues a separate SELECT for each child collection — 1 query for N agents, then N queries for their documents. Root cause: LAZY loading on @OneToMany without a JOIN FETCH. Fix: add @EntityGraph or use JOIN FETCH in JPQL — SELECT a FROM Agent a JOIN FETCH a.documents — which collapses it to a single query. I confirmed the fix with EXPLAIN ANALYZE showing one join vs N sequential scans.'

[22] INTERVIEWER: You skipped the explanation of what N+1 actually is — tell me what causes it at the Hibernate level, then tell me the specific fix you used.
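For reference, the JOIN FETCH / @EntityGraph fix described in the ideal answer for turn 21 can be sketched as Spring Data JPA repository declarations. This is a non-runnable fragment under stated assumptions: the Agent entity with a LAZY @OneToMany documents collection matches the transcript's description, and the repository and method names are hypothetical.

```java
public interface AgentRepository extends JpaRepository<Agent, Long> {

    // Single query: JOIN FETCH pulls the LAZY documents collection in the
    // same SELECT, instead of 1 query for agents + N queries for documents.
    // DISTINCT avoids duplicate Agent rows from the join.
    @Query("SELECT DISTINCT a FROM Agent a JOIN FETCH a.documents")
    List<Agent> findAllWithDocuments();

    // Equivalent declarative form: the entity graph tells Hibernate to
    // fetch the documents association eagerly for this query only.
    @EntityGraph(attributePaths = "documents")
    List<Agent> findAll();
}
```

Either form collapses the 1 + N pattern to one SQL statement, which is what the EXPLAIN ANALYZE before/after the candidate described would confirm.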