# Interview — Senior Java Backend Engineer - Anjali mock 1

- **Difficulty**: hard
- **Started**: 2026-04-28T07:24:42.301Z
- **Ended**: 2026-04-28T07:39:30.377Z
- **Target**: 10 questions / 20 min

## Score: 5.1 / 10 — _no-hire_

### Dimensions
- Technical depth: 5/10 — Demonstrated reasonable knowledge of JWT-based auth with Spring Security @PreAuthorize at service layer, OTP rate limiting (3 attempts/5 min window), and the three-state idempotency model (processing/sent/failed) in turn 13. The OTP flow in turn 3 was coherent and showed real-world reasoning about Indian government user behavior.
- System design: 5/10 — Correctly identified Kafka as the decoupling mechanism for a notification service, mentioned idempotency via event IDs, and in turn 13 described a three-state status machine (processing/sent/failed) to handle the crash-after-mark-before-send gap. Also mentioned circuit breaker via Resilience4j.
- Communication: 4/10 — Turn 3 was the clearest answer — structured, logical, and showed genuine reasoning. However, most other answers were fragmented, repetitive, and grammatically difficult to follow (turns 5, 7, 11, 13). The candidate frequently used filler phrases and incomplete sentences under pressure.
- Role fit: 5/10 — Resume and intro cover Spring Boot, microservices, PostgreSQL, REST APIs, Jenkins CI/CD, and Azure — all JD requirements. PwC MahaRERA project is relevant and large-scale (34,000 daily users). Eight years of experience aligns with the 7+ year requirement.

### Strengths
- OTP authentication design (turn 3): gave a coherent, user-context-aware rationale for choosing OTP over passwords on a government platform, and correctly described the JWT issuance flow with role claims.
- Spring Security at service layer (turn 3): correctly identified moving @PreAuthorize to the service layer after a real production incident where agents bypassed controller-level checks — this shows practical debugging experience.
- Three-state idempotency model (turn 13): unprompted, described processing/sent/failed states to handle the crash-before-send gap, which is the correct pattern for at-least-once delivery safety.
- Kafka for notification decoupling (turn 11): correctly identified event-driven architecture with a message queue to decouple producers from the notification service.
- Self-correction under pressure (turn 9): when directly challenged on Redis usage, the candidate admitted it was not implemented in production rather than doubling down — shows some intellectual honesty.

### Weaknesses
- Redis/blacklist confusion (turns 5-9): initially implied Redis was used in production for token blacklisting, then admitted it was only discussed — a credibility gap that emerged only under direct challenge.
- System design lacks depth (turn 11): Kafka mention was correct but the design had no discussion of consumer group offsets, dead-letter queues, retry policies, or channel-specific failure handling for SMS vs email vs push.
- Circuit breaker misapplication (turn 13): invoked Resilience4j circuit breaker in response to a question about the idempotency gap after a crash — these are unrelated patterns; the candidate conflated them.
- No evidence of JUnit, Swagger, or Azure-specific services: the JD explicitly requires unit testing, Swagger documentation, and Azure cloud-native experience, none of which were demonstrated or probed with substance.
- Communication quality under technical pressure: answers in turns 5, 7, 11, and 13 were fragmented and hard to follow, which would be a liability in a senior engineer role requiring client-facing or cross-team collaboration at PwC or Capgemini level.

### Study plan
- [ ] JWT security deep dive: study token revocation strategies (Redis-based blacklist with TTL, short-lived access + refresh token rotation) and implement a working Spring Security filter chain with Redis integration.
- [ ] Distributed systems fundamentals: study at-least-once vs exactly-once delivery in Kafka, consumer group semantics, offset management, dead-letter queues, and idempotent consumers — use Confluent documentation and 'Designing Data-Intensive Applications' ch. 11.
- [ ] Resilience patterns: clearly separate circuit breaker (Resilience4j), retry with exponential backoff, and idempotency — understand which pattern solves which failure mode and practice explaining each with a concrete scenario.
- [ ] Java concurrency and JVM internals: study thread pools (ExecutorService), synchronized vs ReentrantLock, volatile, and garbage collection basics — these are common senior Java interview topics that were not covered.
- [ ] Swagger/OpenAPI and testing: practice generating Swagger docs from Spring Boot annotations and writing meaningful JUnit 5 + Mockito unit tests with edge cases — both are explicit JD requirements with zero evidence shown.
- [ ] Structured verbal communication: practice the STAR format for behavioral answers and the C4 model or whiteboard-style narration for system design — record yourself and review for clarity and completeness before the next interview.

## Transcript

**[0] INTERVIEWER**

Give me a 60-second tour of your background — walk me through your experience from 50Hertz to PwC and what you're working on right now.

**[1] CANDIDATE**

complex background problems whether they yeah sure you start growing hi my name is anjari shrivas and i am a senior java developer with around eight years of experience building back-end system using java spring boot and microservices i started my career at 50 hertz where i worked on a renewable energy forecasting platform that's where I got my foundational expertise in Spring Framework and Enterprise's Java development. After that, I moved to Infosys where I worked on a Ford Motor Company engagement. We were migrating a legacy dealer management system from Struts to Spring Boot. And I worked on a module like vehicle delivery tracking, wind-based lookup and online company management system. From there I joined PwC which was probably my most significant project so far. I worked on Maharashtra Real Estate Regulatory Portal. It's a large-scale public platform with over 34,000 daily users. I own the agent registration module and do it designing and developing over 120 REST APIs, building the authentication layer, integrating third-party services like Aadhaar verification payment gateways and also leading the data migration from the legacy offline system to the new digital platform currently i'm at capgemini working on hqprint subscription platform building microservice for subscription management and notification service using spring boot aws and docker

**[2] INTERVIEWER**

At PwC on the MahaRERA project, you mentioned building 120 REST APIs — walk me through how you designed the authentication layer specifically, what mechanism did you use and why?

**[3] CANDIDATE**

We used OTP based multi-factor authentication as the primary login mechanism. The reason we choose OTP over traditional password based login in the Mahalera is our government platform. Most agents accessing it are not highly technical users and OTP via mobile is something every Indian user is already familiar with from banking app and UPI. upi it also eliminated the risk of weak password on in compliance heavy platform the flow was straightforward agent enters their registered mobile number we generate a time-bound otp send via government sms gateway agent submits submits this otp validate is in server side issue and jwt token on success that jwt token then travels with every subsequent request in the the authorization header on the JWT side The token carried the user identity their role whether they were on agent official or admin and in expiry time stamps every service validated the token using string security filter chain before the request reached any business logic for role-based access we used spring security and direct pre-authorized annotation at the method level not just at the controller level specifically because we had a production incident early on where agents were directly calling official endpoints by knowing the urls moving the authorizations check to the service layer meant it was enforced regardless of how endpoint was reached we also added direct limiting on the otp generation endpoints and agent could only request a new otp three times within a five minutes window per mobile number And this prevented brute force attacks where someone could bombard the SMS gateway on try to guess OTPs systematically.

**[4] INTERVIEWER**

You mentioned JWT tokens carrying role information — how did you handle token revocation, specifically if an agent's account was suspended mid-session before their token expired?

**[5] CANDIDATE**

token we used jwt token and also we used for uh we are using for netflix jewel we are giving for the like role based authentication and we also like um when like agent account was suspended so we are added uh in the blacklist like every request spring securely like filter checked in the incoming JWT against the blacklist before allowing it through so like token was blacklist we returned 40 like 401 immediately regardless of like whether the token was still technically valid

**[6] INTERVIEWER**

Where did you store that blacklist — in memory, a database, or something like Redis — and what happened to that blacklist if the server restarted?

**[7] CANDIDATE**

So, also like the trade-off that every request was making a database call to check the blacklist which added latency for example like for Mahara-era traffic levels was acceptable but the better production approach would be Redis and in memory store we are using for TTL support so

**[8] INTERVIEWER**

You said "we are using" — were you actually using Redis in production on MahaRERA, or are you describing what you would do differently in hindsight?

**[9] CANDIDATE**

used in reddix so we used like also postdress as well for the blacklist in production not reddix was something was discussed as a team as a better long-term approach but it wasn't implemented during my time on the project i should have been clear about that but

**[10] INTERVIEWER**

Understood. Let's move to system design. You've worked on microservices across multiple projects — design a notification service for MahaRERA that needs to send SMS, email, and push notifications to 34,000 daily users, with guaranteed delivery and no duplicate sends. Walk me through your architecture.

**[11] CANDIDATE**

in mahalera we are used for synchronous and all synchronous call so in synchronously we are calling one api two second different api using of the rest call and when we are using for message system message queue system so we use for kafka so at the entry point any service that needs to send a notification so like say the applications workflow service when the agent status changes but we want to publish some event or message queue we want to send some email so uh like for notification service directly we can use in that condition like we that this decouple the sender from the notification service completely like the notification service can view from the queue each event has a unique event id before processing so we check idpotency table in the database because if we are like some uh sometimes we are sending multiple times of mail but it is showing it is working on the retired scenario so we also implemented for id potency because of we are saving in the database of the transaction id if transaction id is already present so we so again if we are sending so it will show like it is duplicate so we are handling these things as well

**[12] INTERVIEWER**

You mentioned an idempotency table in the database — if the notification service crashes after marking the event as processed in the idempotency table but before actually sending the SMS, how do you handle that gap?

**[13] CANDIDATE**

are implementing for idfmc and like if notification service are crashes so we are implemented for a circuit breaker design pattern resistance 4j so sometimes we are calling for one api like just example we have two microservices and a is internally calling for b microservices and it is failing again and again so in this scenario we are using for that pattern and we like the The fix is to change the order of operations like end-use status tracking rather than simple process flag. Instead of making complete before sending, we use three states like processing, send and fail. So, when we pick up an event, we mark its processing first, then we attempt delivery. Only on confirm delivery success from the SMS gateway do we update to send.

**[14] INTERVIEWER**

Thanks Anjali, we've covered your background, authentication design, token revocation, and notification service architecture — we'll be in touch on next steps.
