TextKernel explores scaling and affordability of Generative AI for CV parsing

At a glance

TextKernel is a global leader in providing artificial intelligence technology solutions to over 2,500 corporate and staffing organisations worldwide. 

Challenge

TextKernel turned to Amazon Bedrock to address cost and scalability challenges in high-volume CV parsing.

Solution

Firemind used Amazon Bedrock to build a chain-of-thought process that applied TextKernel's prompts to extract CV data into key fields stored in Amazon DynamoDB.

Services used
  • Amazon S3
  • Amazon Bedrock
  • AWS Lambda
  • Amazon DynamoDB

Outcomes
  • 40% cost reduction when using Amazon Bedrock
  • 2x the speed compared to OpenAI models

Business challenges

Enhancing CV parsing at scale

TextKernel sought a scalable and cost-effective way to use generative AI in their CV parsing process. The primary objective was to explore how the advanced language models available on the Amazon Bedrock platform could be harnessed to enhance their CV parsing capabilities, while delivering substantial cost savings and scalability benefits in comparison to OpenAI.

By partnering with Firemind to develop a proof-of-concept solution on Amazon Bedrock, TextKernel aimed to unlock cost-effective generative AI for their core CV parsing operations.

Solution

Harnessing the power of generative AI on AWS

To address TextKernel's challenge, Firemind proposed a solution built on Amazon Bedrock, a fully managed service that offers a choice of high-performing large language models. The team developed a defined chain-of-thought process that used prompts provided by TextKernel to extract key information from CVs, which was then stored in an Amazon DynamoDB table. This approach aimed to streamline the prompt engineering logic and validate the scaling and cost of generative AI for enhancing TextKernel's CV parsing capabilities.
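
To make the flow concrete, the sketch below shows one plausible shape for that step in Python: a chain-of-thought prompt sent to Claude via the Amazon Bedrock Runtime API, with the extracted fields written to DynamoDB. The prompt wording, field names, and table name are illustrative assumptions, not TextKernel's actual prompts.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")
table = boto3.resource("dynamodb").Table("cv-parsing-results")  # hypothetical table name

# Hypothetical chain-of-thought prompt: reason step by step first,
# then emit only a JSON object with the agreed key fields.
PROMPT = (
    "You are parsing a CV. Think step by step about the candidate's "
    "identity, work history, and skills. Then output ONLY a JSON object "
    "with the keys: name, email, job_titles, skills."
)

def parse_cv(cv_text: str, cv_id: str) -> dict:
    """Send one CV through the chain-of-thought prompt and persist the fields."""
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": f"{PROMPT}\n\nCV:\n{cv_text}"}],
        }),
    )
    completion = json.loads(response["body"].read())["content"][0]["text"]
    # Keep only the JSON object that follows the model's step-by-step reasoning.
    fields = json.loads(completion[completion.index("{"):completion.rindex("}") + 1])
    table.put_item(Item={"cv_id": cv_id, **fields})
    return fields
```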

The solution utilised other AWS services, such as Amazon S3 for data ingestion, AWS Lambda for data processing, and AWS Step Functions for orchestration. By harnessing the power of these AWS technologies, Firemind sought to create a scalable and flexible system that could handle the high volume of CVs TextKernel processes.
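
As a rough illustration of how those pieces fit together, the Lambda step in such a pipeline might look like the following. The event shape and module name are assumptions, and parse_cv is the function sketched above.

```python
import boto3

from cv_parser import parse_cv  # function sketched above (hypothetical module name)

s3 = boto3.client("s3")

def handler(event, context):
    """Lambda step invoked by Step Functions for each ingested CV.

    Assumed input shape: {"bucket": "...", "key": "..."}, pointing at a
    plain-text CV uploaded to S3.
    """
    obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
    cv_text = obj["Body"].read().decode("utf-8")
    fields = parse_cv(cv_text, cv_id=event["key"])
    return {"cv_id": event["key"], "fields": fields}
```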

Scalability and cost effectiveness

The most affordable of the performant Amazon Bedrock models produced results with 12% to 57% savings against the benchmark cost set for the project, pointing to significant potential cost savings over OpenAI. The most performant LLM achieved an average maximum response time of 3.6 seconds, 55% of the 6.5-second benchmark and almost twice as fast as OpenAI.

Model Spotlight

Anthropic is an artificial intelligence research company based in the San Francisco Bay Area. Founded in 2021, the company focuses on developing safe and ethical AI systems, particularly AI assistants capable of open-ended dialogue and a wide range of tasks. 

Anthropic has created notable models like Claude, and explores techniques such as ‘constitutional AI’ to imbue their AI with robust ethical principles. Led by a team of prominent AI researchers, Anthropic is positioning itself as an emerging leader in the field of beneficial AI development, working to ensure AI capabilities advance in alignment with human values.

Claude 3 Haiku

We chose Claude 3 Haiku for this project due to its exceptional performance in text extraction and rapid processing times. Haiku was the ideal fit for handling large volumes of CVs because it offered the lowest latency among the Claude 3 models, processing tasks in half the time compared to other models.

Its minimal token usage further enhanced cost efficiency, allowing TextKernel to maintain a high level of precision in resume parsing without inflating operational costs.

Haiku’s strength in delivering consistent output formats, particularly JSON, ensured that the data extraction was both reliable and fast, meeting TextKernel’s needs for scalability and affordability.
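
Even with consistent JSON output, a guard at the boundary keeps downstream processing safe. A minimal validation step, assuming the hypothetical field set from the earlier sketch, might look like this:

```python
REQUIRED_FIELDS = {"name", "email", "job_titles", "skills"}  # hypothetical schema

def validate_extraction(fields: dict) -> dict:
    """Reject an extraction whose JSON drifts from the expected shape."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"extraction missing fields: {sorted(missing)}")
    return fields
```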
