Week 1 | Introductory lecture
Introduces the module, outlining its relevance to the field and connections to other topics. It provides an overview of the content structure, key references, and assessment details.
Week 2 | LLM Integration Techniques
Master the integration of Large Language Models into your applications to unlock the potential of AI in processing and generating human language. Understand why seamless integration is crucial for developing intelligent systems that interact naturally with users. By effectively connecting LLMs with your software, you lay the groundwork for advanced functionalities, enabling your applications to leverage state-of-the-art language capabilities and setting the stage for more complex AI workflows.
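As a flavour of what this week covers, the sketch below shows the core integration idea: application code depends on a small provider-agnostic interface rather than on any one vendor SDK. All names here are illustrative, and the stand-in client runs offline where a real adapter would call a provider's API.

```python
from dataclasses import dataclass
from typing import Protocol


class ChatClient(Protocol):
    """Minimal interface any LLM provider adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...


@dataclass
class EchoClient:
    """Stand-in client so this sketch runs offline; a real adapter
    would wrap a provider SDK behind the same method."""
    prefix: str = "model: "

    def complete(self, prompt: str) -> str:
        return self.prefix + prompt.upper()


def summarise(client: ChatClient, text: str) -> str:
    """Application code sees only the ChatClient interface,
    so providers can be swapped without touching this function."""
    return client.complete(f"Summarise: {text}")


result = summarise(EchoClient(), "hello world")
```

Swapping `EchoClient` for a real adapter is then a one-line change at the call site.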
Week 3 | Managing Conversation Context
Learn to manage conversation context effectively to maintain natural and coherent interactions with users over multiple turns. Discover why handling conversation state is vital for creating applications that remember prior exchanges and provide contextually relevant responses. By mastering techniques for tracking and utilising conversation history, you enhance user experience and enable your systems to engage in more meaningful, human-like dialogues.
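A minimal sketch of one context-management scheme covered this week, a sliding window that drops the oldest turns when the history exceeds a budget. A simple word count stands in for real token counting, and the class name and limits are illustrative.

```python
class ConversationHistory:
    """Keeps the most recent turns within a rough word budget,
    always preserving the system message (a sliding-window scheme)."""

    def __init__(self, system: str, max_words: int = 50):
        self.system = system
        self.max_words = max_words
        self.turns: list[tuple[str, str]] = []  # (role, text)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        # Drop the oldest turns until the history fits the budget.
        while sum(len(t.split()) for _, t in self.turns) > self.max_words:
            self.turns.pop(0)

    def render(self) -> list[tuple[str, str]]:
        """Messages to send to the model: system prompt plus recent turns."""
        return [("system", self.system)] + self.turns


history = ConversationHistory("Be concise.", max_words=4)
history.add("user", "one two three")
history.add("assistant", "four five")  # pushes the first turn out
```

Production systems typically replace the word count with the tokenizer of the target model and may summarise evicted turns rather than discard them.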
Week 4 | Advanced Prompt Engineering
Explore advanced prompting techniques to guide LLMs in generating accurate and relevant responses. Understand why crafting effective prompts is essential for eliciting desired behaviours from language models. By mastering task decomposition, iterative prompting, and recursive query generation, you can better control LLM outputs, ensuring your applications provide valuable and context-appropriate information to users.
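The shape of task decomposition and iterative prompting can be sketched as follows: a complex question is split into focused sub-prompts, and each answer is fed into the next prompt. The step list and the stand-in model function are illustrative; in practice the steps themselves often come from a planning call to the model.

```python
def decompose(question: str, steps: list[str]) -> list[str]:
    """Build one focused prompt per sub-task (task decomposition)."""
    return [
        f"Step {i}: {step}\nOriginal question: {question}"
        for i, step in enumerate(steps, 1)
    ]


def answer_with_decomposition(question: str, steps: list[str], ask) -> str:
    """Ask each sub-prompt in turn, feeding the previous answer
    forward (iterative prompting), and return the final answer."""
    context = ""
    for prompt in decompose(question, steps):
        context = ask(prompt + ("\nSo far: " + context if context else ""))
    return context


# Stand-in for a real model call: echoes the first line of the prompt.
fake_llm = lambda prompt: f"[{prompt.splitlines()[0]}]"
final = answer_with_decomposition("q", ["find x", "double it"], fake_llm)
```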
Week 5 | Developing LLM Agents
Delve into building intelligent agents that leverage LLMs for autonomous task handling. Learn why agent development is key to creating systems that can perform complex tasks without constant human guidance. By integrating frameworks like LangChain and LlamaIndex, you enable your applications to use custom tools, manage sub-tasks, and intelligently determine when to involve human intervention, enhancing efficiency and functionality.
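Beneath frameworks like LangChain and LlamaIndex sits a simple dispatch loop, which the framework-free sketch below illustrates: the model either requests a tool call or emits a final answer. The JSON step format, tool registry, and scripted model are all illustrative stand-ins, not any framework's actual API.

```python
import json

# Hypothetical tool registry; a real agent would register many such callables.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}


def run_agent(model, max_iters: int = 5):
    """Core agent loop: the model emits either a tool call
    {"tool": name, "args": [...]} or a final answer {"final": ...}."""
    observation = None
    for _ in range(max_iters):
        step = json.loads(model(observation))
        if "final" in step:
            return step["final"]
        observation = TOOLS[step["tool"]](*step["args"])
    raise RuntimeError("agent did not finish within max_iters")


def scripted_model(observation):
    """Stand-in for an LLM: first requests a tool, then answers."""
    if observation is None:
        return '{"tool": "add", "args": [2, 3]}'
    return json.dumps({"final": f"result is {observation}"})


answer = run_agent(scripted_model)
```

The `max_iters` cap is one place where "when to involve human intervention" plugs in: on overrun, an agent can escalate instead of raising.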
Week 6 | Incorporating Human Oversight
Understand the importance of integrating human oversight into LLM-driven workflows to ensure accuracy and reliability. Explore why incorporating strategies for human validation and feedback enhances your applications' performance. By designing Human-in-the-Loop workflows, you balance automation with human expertise, addressing critical concerns like data privacy and bias while improving the overall quality of your AI systems.
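One common oversight pattern covered this week is a confidence gate: answers above a threshold flow through automatically, while the rest are queued for human review. The sketch below is a minimal illustration; the threshold value, queue, and sentinel string are all assumptions.

```python
# Illustrative review queue; a real system would persist this.
review_queue: list[str] = []


def route(answer: str, confidence: float, threshold: float = 0.8) -> str:
    """Return high-confidence answers directly; queue the rest
    for human review (a simple human-in-the-loop gate)."""
    if confidence >= threshold:
        return answer
    review_queue.append(answer)
    return "PENDING_HUMAN_REVIEW"
```

Reviewed outcomes can then be fed back as labelled data, closing the feedback loop the week describes.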
Week 7 | Multi-Modal LLM Integration
Expand your applications' capabilities by integrating multi-modal LLMs that process text, images, and audio. Discover why leveraging models like CLIP and GPT-4 enriches user interactions and enables more comprehensive data analysis. By incorporating multi-modal inputs, you create versatile AI systems that can interpret and respond to a variety of information sources, meeting diverse user needs.
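At the integration level, multi-modal input often means building a message whose content mixes text parts with base64-encoded image parts. The sketch below shows that general shape; the exact field names vary by provider, so treat this structure as illustrative rather than any specific API's schema.

```python
import base64


def image_part(raw_bytes: bytes, mime: str = "image/png") -> dict:
    """Package raw image bytes as a base64 data-URL content part,
    the general shape several multi-modal chat APIs accept
    (illustrative, not provider-exact)."""
    b64 = base64.b64encode(raw_bytes).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}


def multimodal_message(text: str, image_bytes: bytes) -> dict:
    """A user message combining a text part and an image part."""
    return {
        "role": "user",
        "content": [{"type": "text", "text": text}, image_part(image_bytes)],
    }


msg = multimodal_message("What is in this picture?", b"\x89PNG fake bytes")
```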
Week 8 | Semantic Retrieval and Embeddings
Master semantic retrieval techniques to enable your applications to understand and process large volumes of data effectively. Learn why using embeddings and vector storage is crucial for capturing semantic similarity and enhancing information retrieval. By comparing embedding models and exploring vector stores like FAISS, Chroma, and Pinecone, you equip your systems to provide accurate and relevant responses based on extensive datasets.
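The core retrieval operation can be shown in a few lines: rank documents by cosine similarity between their embedding vectors and a query vector. The hand-made 3-dimensional "embeddings" below are toy values for illustration; a real system would embed text with a model and store the vectors in FAISS, Chroma, or Pinecone.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Toy corpus with hand-made 3-d "embeddings" (illustrative values).
corpus = {
    "cats are pets": [0.9, 0.1, 0.0],
    "dogs are pets": [0.8, 0.2, 0.0],
    "stocks fell today": [0.0, 0.1, 0.9],
}


def retrieve(query_vec, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query vector."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, corpus[doc]), reverse=True)
    return ranked[:k]
```

Vector stores replace the linear scan in `retrieve` with approximate nearest-neighbour indexes so the same idea scales to millions of documents.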
Week 9 | Secure Data Integration
Prioritise data security and privacy when integrating LLMs with databases. Understand why managing data flow securely is essential for protecting sensitive information. By employing techniques such as Reversible Anonymisation, Data Masking, and Tokenisation, you ensure that your applications handle data responsibly, maintaining user trust and complying with legal and ethical standards.
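Reversible anonymisation can be sketched as a two-way mapping: sensitive values are swapped for opaque tokens before text reaches the LLM, and the mapping restores them afterwards. The token format and class below are illustrative; production systems would also detect the sensitive values automatically rather than receive them as a list.

```python
import itertools


class Anonymiser:
    """Replaces sensitive values with opaque tokens and keeps the
    mapping so the substitution can be reversed after the LLM call."""

    def __init__(self):
        self._forward: dict[str, str] = {}   # secret -> token
        self._backward: dict[str, str] = {}  # token -> secret
        self._ids = itertools.count(1)

    def mask(self, text: str, secrets: list[str]) -> str:
        for secret in secrets:
            if secret not in self._forward:
                token = f"<PII_{next(self._ids)}>"
                self._forward[secret] = token
                self._backward[token] = secret
            text = text.replace(secret, self._forward[secret])
        return text

    def unmask(self, text: str) -> str:
        for token, secret in self._backward.items():
            text = text.replace(token, secret)
        return text


anon = Anonymiser()
masked = anon.mask("Contact alice@example.com", ["alice@example.com"])
restored = anon.unmask(masked)
```

Only the masked text ever leaves the trusted boundary; the mapping stays local.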
Week 10 | Fine-Tuning and Deployment
Develop expertise in fine-tuning LLMs to customise their behaviour for specific applications. Learn why fine-tuning methods like instruction tuning, supervised fine-tuning (SFT), LoRA, and parameter-efficient fine-tuning (PEFT) are vital for optimising model performance. By mastering frameworks like Hugging Face and NVIDIA's NeMo, and understanding distributed fine-tuning techniques, you prepare to scale and deploy AI solutions that are tailored to your needs.
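The arithmetic behind LoRA fits in a few lines: instead of updating a frozen weight matrix W directly, train two small low-rank matrices B (d×r) and A (r×k) and use W + alpha·(B·A) as the effective weight. The toy 2×2 example below illustrates the merge; the scaling factor is folded into a single alpha here for simplicity, whereas implementations typically use alpha/r.

```python
def matmul(A, B):
    """Plain-Python matrix product for the toy example."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]


def lora_merge(W, B, A, alpha: float = 1.0):
    """Effective weight after a LoRA update: W + alpha * (B @ A).
    Only the small factors B and A are trained; W stays frozen."""
    delta = matmul(B, A)
    return [[w + alpha * d for w, d in zip(w_row, d_row)] for w_row, d_row in zip(W, delta)]


# 2x2 frozen weight with rank-1 adapters (toy numbers).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # 2x1
A = [[0.5, 0.5]]     # 1x2
W_eff = lora_merge(W, B, A)
```

Because r is much smaller than d and k in practice, the trainable parameter count drops from d·k to r·(d+k), which is what makes PEFT methods cheap to train and distribute.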