ResumeXP

An AI-powered application to help users maximize their resume potential in seconds.

Jan 29, 2026 — Feb 3, 2026

Development Blog Posts

Curiosity

January 29, 2026 - 7:12PM

I realized I have an AI API key lying around and I've been thinking about applications that could leverage AI to provide real value to users. There are a few resume analyzer applications out there that I have used, but I'm really interested in how it actually works. I suspect it's simpler than it sounds, so I want to have a go at it.

Right off the bat, I can already see what the architecture of this project would look like:

  • User uploads their resume, clicks a button to analyze it
  • The file is sent to the file parsing API
  • The API extracts the text from the resume and sends it to the AI analysis API
  • Using careful prompt engineering, the AI analyzes the resume and provides suggestions for improvement
  • The suggestions are sent back to the frontend and displayed to the user
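
The steps above could be sketched roughly like this. Note that `parse` and `analyze` are injected placeholders I made up for the outline, not the app's real functions:

```javascript
// Hypothetical sketch of the planned flow. `parse` and `analyze` are
// injected as placeholders so the orchestration can be outlined before
// either piece exists (they are not the app's real functions).
async function analyzeResume(file, { parse, analyze }) {
  // Extract the text from the uploaded resume file
  const text = await parse(file);
  if (!text || !text.trim()) {
    throw new Error("Could not extract any text from the resume");
  }
  // Run the AI analysis and return the suggestions to the caller
  return analyze(text);
}
```

In the real app, the two injected steps would map to the file parsing API and the AI analysis API described above.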

Looking ahead, I presume the main challenge would be making sure the resume passes the Applicant Tracking System (ATS). From what I've read, many companies use ATS to filter resumes before a human even sees them. So making sure the resume is optimized for ATS would be crucial. I'll have to do some research on how ATS systems work and what they look for in resumes.

That's all I had in mind for today; I'll continue working on this in my free time. Also, it felt like January flew by really quickly... just me?

— Montasir

Pipeline

January 31, 2026 - 1:57AM

I spent a day finishing the MVP for ResumeXP. I finally have the full flow working from uploading a resume, to parsing the text, to sending it through the AI pipeline and then visualizing the results in a clean UI. Once I started working on it, I could not stop until it was done. It felt exhilarating to see everything come together.

The upload and parsing flow was pretty straightforward. Users can drop in a PDF, DOCX, or TXT file and the analysis flow handles it. What I used for each file type:

  • 'react-pdftotext' library for PDF files, which takes the file object directly and returns the extracted text
  • 'mammoth' library for DOCX files: convert the file to an ArrayBuffer, pass it to Mammoth, and grab the value field from the result
  • For TXT and other plain-text files, you can just call file.text(), which is convenient
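
A rough sketch of that per-type dispatch, with the actual library calls summarized in comments (the helper name `pickParser` is mine, not from the project):

```javascript
// Hypothetical sketch of the client-side dispatch by file extension.
// Each branch comment summarizes the real library call, which resolves
// to the extracted text:
function pickParser(fileName) {
  const ext = fileName.split(".").pop().toLowerCase();
  switch (ext) {
    case "pdf":
      // pdfToText(file) from 'react-pdftotext' takes the File object directly
      return "pdf";
    case "docx":
      // file.arrayBuffer()
      //   .then(buf => mammoth.extractRawText({ arrayBuffer: buf }))
      //   .then(result => result.value)
      return "docx";
    default:
      // file.text() covers TXT and other plain-text files
      return "text";
  }
}
```

In the real flow each branch resolves to a promise of text; here the labels just show which strategy would run.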

Why did I parse the files on the client instead of the server? I chose client-side parsing because the text extraction requirements were straightforward and supported by lightweight browser libraries like the ones above. Parsing on the client reduced backend complexity and made the upload experience feel more responsive. If the parsing logic ever becomes more complex or needs stricter validation, I can always move it to the server.

Once the text is parsed and ready, it's sent to the AI pipeline. This part went smoothly, and having done something similar in RamAI helped a lot. I built a dedicated API route that handles the entire process of:

  • Validating the input
  • Constructing a strict system prompt
  • Sending the request to the AI model
  • Cleaning up the AI's response

I designed the analysis pipeline so the AI is constrained to return a strictly defined JSON structure. Returning unstructured text would make it difficult to reliably render results on the frontend, so a structured response was necessary. By enforcing a schema, the frontend can confidently parse and visualize sections like ratings, strengths, weaknesses, and suggestions.
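
As an illustration, a system prompt along these lines would pin the model to that shape. The exact wording and field names here are my guesses based on the sections the UI renders, not the app's real prompt:

```javascript
// Hypothetical sketch of the "strict system prompt" step. The wording
// and the field names (rating, strengths, weaknesses, suggestions) are
// assumptions, not the project's actual prompt.
function buildSystemPrompt(resumeText) {
  return [
    "You are a resume reviewer.",
    "Respond with ONLY a JSON object, no prose, matching this shape:",
    '{ "rating": number, "strengths": string[], "weaknesses": string[], "suggestions": string[] }',
    "Resume:",
    resumeText,
  ].join("\n");
}
```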

To make this robust, I added a JSON extraction and validation step. If the model includes extra text outside the expected structure, the parser strips it out and keeps only the valid JSON. I also added defensive error handling for missing or malformed fields, ensuring the UI never renders partial or inconsistent data.
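
That extraction-and-validation step might look something like this. It's a sketch under my own assumptions about the field names, not the project's actual code:

```javascript
// Hypothetical sketch of the JSON extraction step: pull the first
// '{'-to-last-'}' span out of the model's reply, parse it, and check
// the required fields. Field names are assumptions based on the
// sections the UI renders.
function extractAnalysis(raw) {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  let parsed;
  try {
    parsed = JSON.parse(raw.slice(start, end + 1));
  } catch {
    return null; // extra text or truncated JSON the slice couldn't save
  }
  // Defensive validation: reject partial or malformed results so the
  // UI never renders inconsistent data.
  const ok =
    typeof parsed.rating === "number" &&
    Array.isArray(parsed.strengths) &&
    Array.isArray(parsed.weaknesses) &&
    Array.isArray(parsed.suggestions);
  return ok ? parsed : null;
}
```

Returning null for anything malformed keeps the failure mode simple: the frontend either gets a complete analysis or a single "try again" state.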

On the frontend, this structured output made it straightforward to design a clean, readable interface that visualizes the analysis clearly. With the MVP complete, the focus now shifts to refining prompt quality, tightening formatting, and improving ATS checks to make the overall experience more polished.

[Screenshot: the analysis results UI, showing an overall score of 7.8/10]

7.8 out of 10? Guess I know what I'm fixing next... :p

— Montasir

Flow

February 3, 2026 - 12:08PM

I've been wrapping up ResumeXP for the past couple of days. The last few components I've been finishing are:

  • Job matching based on the user's job description input
  • Past analysis history for authenticated users
  • Cover letter generation

There are definitely more features I could add, but I want to keep the app light for now. I try to keep these projects small and organized so I can actually finish them and so they're easier to revisit in the future. That's the same approach I took with RamAI and UniWeek. I can always expand them later if I feel the need.

Beyond the technical work, I'm also practicing my flow: ideation, planning, building the MVP, then refining and polishing it. Each project teaches me something new about how to approach the next one, and that's been the most valuable part of this whole process.


Visit the site here:

https://resumexp.vercel.app

View the source code here:

https://github.com/montasirmoyen/resumexp

— Montasir

Montasir Moyen - Full-Stack Software Developer & Engineer in Boston