Interview — Senior Java Backend Engineer - mock 2
Difficulty: hard
Started: 2026-04-28T08:03:12.287Z
Ended: 2026-04-28T08:15:55.350Z
Score: 5.2/10 (no-hire)

Strengths:
- Correctly described the HashMap treeification threshold as 8 entries without being told the answer first (turn 7), showing genuine familiarity with JDK source-level details.
- Accurately explained the Java 7 infinite-loop rehashing risk and the Java 8 lost-update risk in the ConcurrentHashMap context (turn 9), demonstrating awareness of version-specific concurrency hazards.
- Correctly identified CAS for empty buckets and synchronized-on-head-node for non-empty buckets as the dual mechanism in Java 8 ConcurrentHashMap (turn 13), which is a non-trivial detail.
- The Spring Boot auto-configuration explanation (turn 15) correctly named spring.factories, conditional annotations (@ConditionalOnClass, @ConditionalOnMissingBean), and the idea that beans are not blindly created.
- Breadth of project exposure across energy forecasting, automotive diagnostics, and real estate regulation shows adaptability across domains relevant to enterprise Java work.

Weaknesses:
- Answers required repeated prompting to reach specifics — the treeify threshold, the CAS mechanism, and the conditional annotation names all came only after follow-up questions, not proactively.
- No system design, SQL, or JPA/transaction questions were reached, leaving the majority of the JD's required competencies unassessed and undemonstrated.
- Communication quality is significantly below senior level — fragmented sentences, filler-heavy delivery, and imprecise phrasing ('it is already synchronized', 'we will create a object') undermine confidence in production-level technical discussions.
- Resume quality is poor: missing years on job dates, no mention of JUnit or Swagger despite JD requirements, and a generic boilerplate career objective — all of which raise questions about attention to detail.
- The Spring Boot answer missed the AutoConfigurationImportSelector / ImportSelector mechanism and did not explain how @SpringBootApplication composes @EnableAutoConfiguration, suggesting surface-level rather than internals-level understanding.

Study plan:
- Spring Data JPA internals: study @Transactional propagation levels (REQUIRED vs REQUIRES_NEW), lazy loading pitfalls, and the N+1 query problem with solutions (@EntityGraph, JOIN FETCH) — directly tested in JD-aligned interviews; a sketch of the fix follows this plan.
- PostgreSQL query optimization: practice reading EXPLAIN ANALYZE output, understand B-tree vs GIN vs GiST index types, and know when partial indexes help — the JD explicitly calls out PostgreSQL SQL writing.
- System design fundamentals: practice designing a rate-limited REST API or an event-driven notification service end-to-end, articulating trade-offs (consistency vs availability, sync vs async) — this segment was never reached.
- Spring Boot internals: study AutoConfigurationImportSelector, the full @SpringBootApplication composition, and how to write a custom auto-configuration with @ConditionalOnProperty — deepen beyond spring.factories awareness.
- Concurrency patterns: study ReentrantLock vs synchronized, CompletableFuture chaining, and common thread-pool sizing strategies — current knowledge is correct but shallow and needs to be volunteered without prompting.
- Communication and interview delivery: practice the STAR format for technical answers — lead with the mechanism/decision, then give the example, then state the trade-off. Record yourself and eliminate filler phrases.
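To make the first study-plan item concrete, here is a minimal sketch of the N+1 problem and the two fixes named above. The Author/Book entities and the repository are hypothetical illustrations (assuming Spring Boot 3 with jakarta.persistence), not code from the interview:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.OneToMany;
import java.util.List;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

@Entity
class Author {
    @Id @GeneratedValue Long id;
    String name;
    // @OneToMany is LAZY by default: loading N authors and then touching
    // author.books in a loop fires N additional queries, the N+1 problem.
    @OneToMany(mappedBy = "author")
    List<Book> books;
}

@Entity
class Book {
    @Id @GeneratedValue Long id;
    String title;
    @ManyToOne(fetch = FetchType.LAZY)
    Author author;
}

interface AuthorRepository extends JpaRepository<Author, Long> {
    // Fix 1: JOIN FETCH loads authors and their books in a single SQL statement.
    @Query("select distinct a from Author a join fetch a.books")
    List<Author> findAllWithBooks();

    // Fix 2: @EntityGraph declares the association to load eagerly for this
    // query only, overriding the inherited findAll().
    @Override
    @EntityGraph(attributePaths = "books")
    List<Author> findAll();
}
```

Calling the plain inherited findAll() and iterating books is the easiest way to observe the extra queries in the SQL log; either fix above collapses them into one join.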
--- TRANSCRIPT ---

[0] INTERVIEWER: Give me a 60-second tour of your background — where you started, what you're doing now, and what kind of work you've been closest to technically.

[1] CANDIDATE: Hi, my name is Anjali. I have around 8 years of experience as a Java backend developer. And technology wise, I work with Spring Boot, Spring MVC, microservices, on the database side PostgreSQL, MySQL. And I also work with CI/CD pipelines, Splunk, Sonar and event-driven architecture. So I joined 50Hertz in 2019, that was my first company, and I worked with wind and solar projects, so basically we are working on the forecasting system and we are working on algorithms. And after that I joined Infosys, and at Infosys I worked on four projects, so there is multiple module for dto and s video, so basically we are working on the VIN number and we are tracking the data, like sometimes car parts is missing, so we are maintaining all the data, so we are working on that part. And after that I joined PwC and I worked on the Maharashtra real estate regulatory project, so like dealers and agents, like agents and customers need to submit all the information in that application. And also previously they were working on Struts, but we converted it into a Spring Boot project. So we used microservice based architecture; on the database side we used PostgreSQL and we are maintaining all the information.

Good: Gave a coherent chronological walkthrough covering three employers, named relevant technologies (Spring Boot, PostgreSQL, microservices, CI/CD, Splunk), and briefly described each project domain.

Missed: The intro was too long and unfocused — 'we are working on that part' and similar phrases added no signal. No mention of team scope, ownership level, or a specific technical achievement that would differentiate a senior candidate.

Ideal: A strong 60-second intro names employer, role scope, and one concrete technical highlight per role (e.g., 'at PwC I led the migration of a Struts monolith to Spring Boot microservices, reducing deployment time by X'). It closes with a one-line statement of current technical focus and what the candidate wants next.

[2] INTERVIEWER: Let's get into the technical details. In Java, explain how HashMap works internally — specifically what happens when two keys have the same hash code.

[3] CANDIDATE: So HashMap is working on the key value pair, and HashMap internally is working like, it is using the equals method, it is working on a bucket system, it is saving the hash code value. So HashMap internally uses an array of buckets, like, it is working on the key value pair, so Java calls hashCode on the key, applies a secondary hash function to distribute this more evenly. So we have like buckets, like we have a 10 number of elements, so when two keys produce the same hash code, which is a collision, both entries go to the same bucket. So the bucket stores entries as a linked list, so the second entry is simply chained onto the first using the next pointer. When you call the get method with the key, Java finds the bucket using the hash, then walks the linked list calling equals on each entry until it finds the matching key.

Good: Correctly described the bucket array structure, hash code computation, collision handling via linked list chaining, and the equals-based key lookup during get — the core mechanics are understood.
Missed: Did not mention the treeification behavior at all (volunteered only after being asked), did not mention the load factor or resize trigger, and did not mention that hashCode() is further mixed/spread by an internal hash function to reduce clustering.

Ideal: A strong answer covers: array of Node[] buckets sized as a power of 2, index = (n-1) & hash(key), collision → linked list chaining, treeification at threshold 8 (with MIN_TREEIFY_CAPACITY=64), load factor 0.75 triggering resize/rehash, and why equals() must be consistent with hashCode(). Mentioning the secondary hash spread ((h = key.hashCode()) ^ (h >>> 16)) would signal source-level familiarity.

[4] INTERVIEWER: What happens to that linked list in a bucket when it grows very large, and at what point does Java change the data structure used to store entries in that bucket?

[5] CANDIDATE: If the bucket's list will be very large, so in that case Java, it is introduced the red black tree internally, so red black tree is nothing but a balanced tree. Also, like, the additional condition is less well known: the conversion only happens if the overall hash map has at least 64 buckets. If the total capacity is below 64, Java instead resizes and rehashes the entire map rather than treeifying that bucket, so the idea being that at a small map size resizing is cheaper than maintaining the tree structure overhead. So internally we will use red black trees.

Good: Correctly identified the red-black tree conversion and accurately stated the MIN_TREEIFY_CAPACITY condition of 64 total buckets — this is a non-obvious detail that shows real source familiarity.

Missed: Did not state the per-bucket threshold of 8 (said 'very large' instead), and did not mention the untreeify threshold of 6 when entries are removed, which would show deeper understanding of the hysteresis design.

Ideal: A complete answer states: TREEIFY_THRESHOLD=8 entries in one bucket triggers conversion to TreeNode (red-black tree), but only if total capacity >= MIN_TREEIFY_CAPACITY=64; otherwise the map resizes instead. Also worth noting: UNTREEIFY_THRESHOLD=6 converts back to a linked list on removal to avoid thrashing.

[6] INTERVIEWER: What's the threshold — the specific number of entries in a single bucket — that triggers the conversion from linked list to red black tree?

[7] CANDIDATE: so specific number of eight so like eight entries that's the three five threshold constant in hash map source code

Good: Correctly stated the threshold is 8, which is accurate (TREEIFY_THRESHOLD = 8 in HashMap source).

Missed: Said 'eight entries that's the three five threshold constant' — the phrasing 'three five' is garbled and unclear (likely a mis-rendering of TREEIFY). Did not name the constant TREEIFY_THRESHOLD or its companion UNTREEIFY_THRESHOLD=6.

Ideal: The ideal answer names the static final constant TREEIFY_THRESHOLD = 8 in java.util.HashMap, notes it was chosen empirically to balance the memory overhead of TreeNode (roughly 2x Node size) against the O(log n) vs O(n) lookup benefit, and mentions UNTREEIFY_THRESHOLD = 6 as the hysteresis guard on removal.
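A runnable sketch to make the treeification discussion concrete. CollidingKey is a hypothetical class, not something from the interview; its constant hashCode forces every entry into a single bucket, and the constants referenced in the comments are the HashMap source values cited above:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical key type: a constant hashCode forces every entry into one bucket.
final class CollidingKey {
    private final String name;
    CollidingKey(String name) { this.name = name; }
    @Override public int hashCode() { return 42; } // deliberate collision
    @Override public boolean equals(Object o) {
        return o instanceof CollidingKey && ((CollidingKey) o).name.equals(name);
    }
}

public class TreeifyDemo {
    public static void main(String[] args) {
        // Initial capacity 64 so the table length meets MIN_TREEIFY_CAPACITY = 64.
        Map<CollidingKey, Integer> map = new HashMap<>(64);
        // Once one bucket holds more than TREEIFY_THRESHOLD = 8 entries (and the
        // table is at least 64 wide), HashMap converts that bucket's linked list
        // into a red-black tree of TreeNodes.
        for (int i = 0; i < 10; i++) {
            map.put(new CollidingKey("k" + i), i);
        }
        // get() still walks the bucket comparing with equals(), so lookups stay
        // correct despite every key colliding.
        System.out.println(map.get(new CollidingKey("k7"))); // prints 7
    }
}
```

Shrinking the initial capacity below 64 in this sketch would exercise the other branch the candidate described: the map resizes instead of treeifying.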
[8] INTERVIEWER: The threshold is actually 8, you're correct. Now let's move to concurrency — if two threads simultaneously call `put` on a regular HashMap, what can go wrong, and what are your options to handle it safely?

[9] CANDIDATE: So ConcurrentHashMap is thread safe, and if we are using get and put, so it is also by default thread safe. A normal HashMap cannot be used. So the most dangerous is an infinite loop in Java 7 during rehashing, when the map resizes and two threads resize simultaneously. So resizing can create a circular reference in the linked list, and a get call on that key loops forever, hanging the thread permanently. In Java 8 this circular reference issue was fixed, but you can still get a lost update: two threads writing in the same bucket simultaneously can cause one entry to silently overwrite the other, so data is just lost without an exception.

Good: Correctly identified the Java 7 infinite-loop / circular reference risk during concurrent resize and the Java 8 silent lost-update risk — both are accurate and non-trivial points.

Missed: Did not name the three safe alternatives clearly: Collections.synchronizedMap (coarse lock), ConcurrentHashMap (fine-grained), or explicit external synchronization. Also did not mention that HashMap gives no memory visibility guarantees, so reads can see stale data even without a write conflict.

Ideal: A strong answer names the failure modes (Java 7: infinite loop from a circular linked list during resize; Java 8: lost updates and stale reads due to no happens-before guarantee), then lists the three remedies: (1) ConcurrentHashMap for high-concurrency read/write, (2) Collections.synchronizedMap for simple cases, (3) external locking if atomic compound operations are needed. Mentioning that even reads on a HashMap are unsafe without synchronization, due to the memory model, would be senior-level.

[10] INTERVIEWER: Good. What specific mechanism does ConcurrentHashMap use internally to allow concurrent reads and writes without locking the entire map?

[11] CANDIDATE: ConcurrentHashMap internally, it is working on the multi-threaded environment, it is designed for the multi-threaded environment, it is thread safe and synchronized. So when we are writing, uh, reading and writing the some thread, so it is like locking for that part, not the entire ConcurrentHashMap. So when we are, the, if we want to lock, so we will create a ConcurrentHashMap object. So previously in Java 7 we are working on the segment system, so we are using ReentrantLock, like we are extending from ReentrantLock, but in Java 8 we are not working on the segment, we are working on the bucket system. So when we are writing or reading something, so we will create a object, and there is some functions, some methods like get, the get method, so it is all already thread safe, but some atomic methods is also available in ConcurrentHashMap, so the putIfAbsent function is available, so you can use it because it is thread safe.

Good: Eventually mentioned the Java 7 segment-based approach and noted that Java 8 moved to bucket-level locking, which is directionally correct.

Missed: The answer was circular and did not answer the question — 'it is thread safe and synchronized' is not an explanation of the mechanism. CAS was not mentioned at all in this turn.

Ideal: A direct answer: Java 8 ConcurrentHashMap uses CAS (compare-and-swap via Unsafe.compareAndSwapObject) to insert into an empty bucket — no lock acquired. For non-empty buckets it uses synchronized on the head node of the bucket (not a ReentrantLock, not a segment). Reads are lock-free because Node.val and Node.next are declared volatile. This gives O(1) amortized reads with minimal write contention, bounded to individual bucket heads.
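A short sketch of the lost-update hazard and the atomic remedy discussed above. The counter example is illustrative, not from the interview:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LostUpdateDemo {
    static void hammer(Map<String, Integer> map) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                // merge() is atomic on ConcurrentHashMap; on a plain HashMap this
                // read-modify-write races and increments are silently lost.
                map.merge("counter", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(map.getClass().getSimpleName() + ": " + map.get("counter"));
    }

    public static void main(String[] args) throws InterruptedException {
        hammer(new ConcurrentHashMap<>());    // always prints 200000
        // hammer(new java.util.HashMap<>()); // unsafe: nondeterministic result,
        //                                    // and on Java 7 a concurrent resize
        //                                    // could even loop forever
    }
}
```

The same compound-operation point applies to putIfAbsent and compute, which the candidate mentioned: they are only atomic on ConcurrentHashMap, not on a synchronized wrapper used with separate get-then-put calls.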
[12] INTERVIEWER: What specific low-level mechanism does Java 8's ConcurrentHashMap use to perform a write on a bucket — is it synchronized, a lock, or something else, and how does it minimize contention?

[13] CANDIDATE: Using ConcurrentHashMap we can't need a synchronization, because if we are using synchronization it is locking the entire map, so when we are creating the object of ConcurrentHashMap it is already synchronized. So if the bucket already has entries, it synchronizes on the head node of the bucket, specifically not a ReentrantLock, not a segment lock, just a synchronized block using the first node in the bucket as the monitor object. So the mechanism is actually a combination of both: CAS for the uncontended empty bucket case, which is the fastest path, and synchronized on the bucket head node for the contended case where entries already exist. I will tell you the truth.

Good: Correctly articulated the CAS path for empty buckets and synchronized-on-head-node for non-empty buckets, and correctly distinguished this from ReentrantLock and segment locks — the most precise answer in the interview.

Missed: Did not mention that the Node fields (val, next) are declared volatile, which is what makes lock-free reads safe — the half of the design the answer left out.

Ideal: A complete answer adds: Node.val and Node.next are volatile, enabling fully lock-free reads with happens-before guarantees. The synchronized monitor is the head Node object itself (not a dedicated lock), keeping memory overhead minimal. For tree bins, the monitor is the TreeBin wrapper object. This design means read throughput scales with CPU count while write contention is bounded to individual bucket heads.

[14] INTERVIEWER: That's accurate — CAS for empty buckets, synchronized on the head node otherwise. Let's move to Spring Boot. Explain how Spring Boot's auto-configuration works — what triggers it and how does it decide which beans to create.

[15] CANDIDATE: In Spring Boot there is one annotation, the Spring Boot application annotation, so that is a part of three annotations, so one is auto configuration, so when we are using the auto configuration annotation, like the enable auto configuration annotation, so the mechanism works through the file called spring factories, spring dot factories, located inside every Spring Boot starter JAR. For example, the spring-boot-autoconfigure JAR — this file contains a list of auto-configuration classes mapped under the EnableAutoConfiguration key, and Spring Boot loads all these classes at startup. So, but it doesn't blindly create all those beans. Each auto-configuration class is annotated with conditional annotations, control whether it activates. The most common one is @ConditionalOnClass, only configures if a specific class is on the class path, and @ConditionalOnMissingBean only creates a bean if the developer hasn't already defined one themselves, and @ConditionalOnProperty if properties are defined or not.

Good: Correctly described the spring.factories file location, the EnableAutoConfiguration key, conditional annotations (@ConditionalOnClass, @ConditionalOnMissingBean), and the principle that auto-configuration yields to user-defined beans.
Missed: Did not name AutoConfigurationImportSelector as the class that reads spring.factories and feeds candidate class names to the context. Did not mention that in Spring Boot 2.7+ the mechanism moved to META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports. Did not explain the @AutoConfigureOrder or @AutoConfigureAfter/@AutoConfigureBefore ordering mechanism.

Ideal: A senior answer explains: @SpringBootApplication includes @EnableAutoConfiguration, which imports AutoConfigurationImportSelector; this class reads META-INF/spring.factories (Boot <2.7) or META-INF/spring/...AutoConfiguration.imports (Boot >=2.7) to get the candidate class names. Each candidate class uses @Conditional* annotations to gate bean creation. @AutoConfigureAfter/@AutoConfigureBefore control ordering. A developer overrides any auto-configured bean simply by declaring their own @Bean of the same type, triggering @ConditionalOnMissingBean to skip the auto-configured one.

[16] INTERVIEWER: You showed solid breadth across HashMap internals, ConcurrentHashMap, and Spring Boot auto-configuration, but answers often needed prompting for specifics — work on leading with precise details like thresholds, class names, and mechanisms without waiting to be asked.
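To ground the auto-configuration study-plan item, a minimal custom auto-configuration sketch. GreetingService, the greeting.enabled property, and the package name are hypothetical, and the registration file depends on the Boot version as noted in the Ideal answer above:

```java
package com.example.autoconfigure;

import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// The service being auto-configured (hypothetical; normally it lives in a
// separate library so @ConditionalOnClass can actually fail to match).
class GreetingService {
    private final String message;
    GreetingService(String message) { this.message = message; }
    String greet() { return message; }
}

// Register this class in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports
// (Boot >= 2.7) or under the EnableAutoConfiguration key in META-INF/spring.factories (older Boot).
@Configuration
@ConditionalOnClass(GreetingService.class)  // activate only if the class is on the classpath
@ConditionalOnProperty(prefix = "greeting", name = "enabled", matchIfMissing = true)
public class GreetingAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean  // back off if the application already defines its own bean
    GreetingService greetingService() {
        return new GreetingService("hello from auto-configuration");
    }
}
```

Declaring your own GreetingService @Bean in application code makes @ConditionalOnMissingBean skip this one, which is exactly the yield-to-user behavior described in turn 15.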