Applications

In this part we will build several applications that integrate LLMs in different roles, from frontend chatbots that interact directly with the user to components hidden in the backend. These chapters are best read in order, although they are mostly self-contained, and you can skip some of them if you aren’t interested in a particular type of application. In any case, whenever a chapter builds on top of a previous application, we will clearly state it in the corresponding introductory section.

Note

This part is under construction.

In Chapter 13  The Chatbot we build our first LLM-powered application: a basic chatbot. We will learn how to set up a conversation loop and how to keep track of previous messages to simulate short-term memory for the bot. We will also learn how to stream messages for a better user experience (simulating a typing animation instead of waiting for the full response).
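To give a flavor of what this looks like, here is a minimal sketch of such a conversation loop that keeps the full message history; the `llm_reply` helper is a hypothetical placeholder for the actual model call we will set up in the chapter.

```python
# A minimal sketch of a conversation loop with short-term memory.
# `llm_reply` is a placeholder, not the chapter's actual client code.

def llm_reply(messages: list[dict]) -> str:
    """Placeholder: a real implementation would send `messages` to an LLM provider."""
    return f"(canned reply to: {messages[-1]['content']})"

def chat() -> None:
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user_input = input("You: ")
        if user_input.strip().lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_input})
        answer = llm_reply(history)  # the model always sees the whole history
        history.append({"role": "assistant", "content": answer})
        print(f"Bot: {answer}")

if __name__ == "__main__":
    chat()
```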

This one is a must-read chapter, because it lays the groundwork for the rest of the applications. It is also the only chapter where we’ll build the application step by step, showing how to go from zero to a functioning bot. In the rest of the chapters, we’ll start with a working application and dissect it to understand how it works, but we will move much faster to cover a wider range of functionalities.

Next up, in Chapter 14  The PDF Reader we tackle our first augmented chatbot: a PDF question-answering system. We will build our own version of a vector store, and learn how to convert a large document into indexable chunks that can be retrieved at query time and injected into the bot prompt.
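As a preview, the sketch below shows the core idea under simplified assumptions: split the text into fixed-size chunks, embed each one, and rank chunks by cosine similarity to the question. The `embed` function here is a toy stand-in for a real embedding model.

```python
# A naive vector store sketch: chunk a document, embed the chunks,
# and retrieve the most similar ones for a given question.
import math

def embed(text: str) -> list[float]:
    # Toy placeholder: a character-frequency vector instead of a real embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def chunk(text: str, size: int = 500) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]  # these chunks get injected into the bot's prompt

document = "Retrieval-augmented generation injects relevant chunks into the prompt..."
print(top_chunks("How does retrieval work?", chunk(document, size=40), k=2))
```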

Leveling up a bit, in Chapter 15  The Answer Engine we build a search-powered chatbot that can browse the web and provide relevant answers with proper citations.

Then, in Chapter 16 we will build a more advanced version of retrieval augmentation. This time it is a shopping helper that can search for items on behalf of the user, add them to the shopping cart, buy them, and track their delivery status, all based on a simulated online store. We’ll learn how to teach an LLM to call methods from an arbitrary API and write our own plugin system.
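The core trick, shown in the sketch below with illustrative function names and a made-up JSON protocol, is to let the model answer with a structured "call" that we parse and dispatch to regular Python functions.

```python
# Sketch of tool calling: describe store operations to the model, have it reply
# with a JSON call, and dispatch that call to a (simulated) store API.
# The function names and JSON schema are illustrative, not the book's actual protocol.
import json

def search_items(query: str) -> list[str]:
    return [f"{query} (item #1)", f"{query} (item #2)"]  # simulated store

def add_to_cart(item: str) -> str:
    return f"Added {item} to the cart."

TOOLS = {"search_items": search_items, "add_to_cart": add_to_cart}

def dispatch(model_output: str) -> str:
    """Expect the model to reply with e.g. {"tool": "search_items", "args": {"query": "headphones"}}."""
    call = json.loads(model_output)
    return str(TOOLS[call["tool"]](**call["args"]))

print(dispatch('{"tool": "search_items", "args": {"query": "headphones"}}'))
```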

In Chapter 17  The Data Analyst we start playing with chatbots that can execute code on their own. We’ll build a very simple data analyst that can answer questions from a CSV file by running Pandas queries and generating charts. We’ll learn how to decide whether a textual response or a code/chart is more appropriate and how to generate structured responses.
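One simple way to get that kind of structured decision, sketched below with an illustrative schema, is to ask the model for a JSON object that declares whether its answer is plain text or a pandas expression to run.

```python
# Sketch of a structured response: the model declares whether a textual answer
# or a pandas expression is more appropriate. The schema is illustrative only.
import json

import pandas as pd

df = pd.DataFrame({"city": ["Lima", "Quito"], "population": [10_000_000, 2_800_000]})

# Stand-in for an LLM reply that chose to answer with code.
model_output = '{"kind": "code", "content": "df[\'population\'].max()"}'

response = json.loads(model_output)
if response["kind"] == "text":
    print(response["content"])                    # show the answer directly
else:
    print(eval(response["content"], {"df": df}))  # run the pandas query (sandbox this in practice!)
```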

Next, in Chapter 18  The Hackerbot we’ll take a quick detour from our standard UI to build a command-line bot that has access to your terminal and can run commands on it, so you’ll never have to google how to unzip a tar.gz file again.
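In spirit, the loop is as simple as the sketch below: a (stubbed) model call proposes a shell command, and we only execute it after the user confirms.

```python
# Sketch of the command-line bot: the model suggests a shell command, the user
# confirms, and we run it with subprocess. `suggest_command` stands in for the LLM call.
import subprocess

def suggest_command(request: str) -> str:
    # Placeholder: a real implementation would ask the LLM for a single shell command.
    return "tar -xzf archive.tar.gz"

request = "unzip this tar.gz file"
command = suggest_command(request)
if input(f"Run `{command}`? [y/N] ").strip().lower() == "y":
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    print(result.stdout or result.stderr)
```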

Jumping from code execution to more traditional text generation, in Chapter 19  The Writer we’ll code a writing assistant that will help us create an article or, more generally, any text document. We’ll learn prompting techniques to summarize, outline, proofread, and edit textual content. We will also learn how to use an LLM to interactively modify a document, adding, rewriting, or deleting arbitrary sections.
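A sketch of one such prompt, with purely illustrative wording, asks the model to rewrite a single named section and return only that section so we can splice it back into the document.

```python
# Sketch of an editing prompt: rewrite one named section and leave the rest untouched.
# The template wording and section names are illustrative.
EDIT_PROMPT = """You are a writing assistant.
Here is the current document, split into sections:

{document}

Rewrite ONLY the section titled "{section}" according to this instruction:
{instruction}

Return the full rewritten section and nothing else."""

prompt = EDIT_PROMPT.format(
    document="## Intro\n...\n\n## Methods\n...",
    section="Methods",
    instruction="Make it more concise and fix passive voice.",
)
# `prompt` would then be sent to the LLM, and its reply spliced back into the document.
print(prompt)
```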

And then, building on top of that same workflow, in Chapter 20  The Coder we build a coding assistant that can interact with a codebase, adding, modifying, refactoring, and deleting code and files.

And that’s it! At least, that’s the plan for now: eight applications of LLMs to the most varied problems, many of which could become the Product Hunt product of the day if you take them to completion.