Hustle Hub #36
🛖 How To Build LLM Applications with LangChain
New to Hustle Hub? Make sure to subscribe for more!
Read Time: 4.5 minutes
Hey hustler,
Last week I attended a meetup on how to build LLM applications using LangChain. It was an amazing turnout with 300+ people from the data community in Singapore.
Most importantly, I learned so much from the speakers (one of them is actually a teacher! 🤯) and I want to share my takeaways with you.
If you didn’t manage to join the meetup, no worries. 🤝🏻
In today's issue, I’d like to share with you what I learned from the speakers when it comes to learning, building and deploying LLM applications with LangChain.
Let’s get to it! 🚀
🦜🔗 How To Build LLM Applications with LangChain
Speaker 1: Ivan Tan
Ivan oversees and coordinates a wide array of recurring events at DataScience SG. As the Head of Data at Tech in Asia, he leads a cross-functional data science, engineering and analytics team to build and transform functional departments into data-driven powerhouses. With over half a decade of experience working in the thriving Asian startup ecosystem, he is also an independent advisor to several AI startups.
🧠 Here are my takeaways from Ivan:
Building AI applications has never been easier. The barriers to entry are much lower given the rise of LangChain (and many other frameworks).
Despite a gazillion metrics for evaluating LLMs, human evaluation is still the best. If it works, it works.
Always start with prompt engineering before fine-tuning an LLM. In most cases, you won’t need to train a model from scratch (see the minimal sketch after this list).
Always know what problem you’re trying to solve. AI isn’t magic that can solve every problem. Problem first, solution later.
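To make the “prompt engineering first” point concrete, here’s a minimal sketch of what that can look like in LangChain. The prompt wording, model name, and imports are my own assumptions (2023-era API; newer LangChain versions use different import paths) and not Ivan’s code:

```python
# A minimal prompt-engineering sketch with LangChain (legacy-style ~2023 API;
# import paths differ in newer releases). Requires OPENAI_API_KEY to be set.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Iterate on the prompt first -- clear role, constraints and output format --
# before reaching for fine-tuning.
prompt = PromptTemplate(
    input_variables=["question"],
    template=(
        "You are a concise data-science tutor.\n"
        "Answer the question below in at most 3 bullet points.\n"
        "Question: {question}"
    ),
)

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(question="When should I fine-tune instead of prompt-engineer?"))
```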
👉🏻 If you want to get Ivan’s Google Colab notebook on LangChain, grab it HERE.
Speaker 2: Kuang Wen
Kuang Wen is currently an educator (Chemistry teacher) who likes to tinker with code and build applications to bring tools to enhance teaching and learning in and out of class, as well as to decrease the workload of fellow educators.
In his sharing, he talked about the design process behind an AI Chemistry tutor using LangChain as a framework, the challenges of building the AI tutor, workarounds, and lessons learnt.
🧠 Here are my takeaways from Kuang Wen:
Building AI applications is already accessible to everyone, and Kuang Wen is one of the best examples. He is a Chemistry teacher who saw a problem at work and built an AI Chemistry Tutor to solve it.
It has never been easier than NOW to build something that solves meaningful problems. English is the new programming language.
👉🏻 If you want to get Kuang Wen’s slides and Google Colab notebook, grab them HERE.
Speaker 3: Daniel
Daniel is a full-time national serviceman (NSF). Ever since picking up practical machine learning at the end of J1, he has written several articles published in Towards Data Science, as well as conducted a class in RI on the fast.ai library. He has won bronze in a Kaggle competition, and has been using LangChain to develop LLM applications in EdTech since Jan ‘23. He is part of String, a tech discovery platform for public officers. He is passionate about applying ML in sports science and education.
In his sharing, he talked about how to build POCs with LangChain and improve user experience.
🧠 Here are my takeaways from Daniel:
You can visualise LangChain’s workflow using langflow. This is useful when your workflow becomes more complex.
To build a quick POC using LangChain, you can use Python to build the backend and Streamlit/Gradio for the frontend (see the sketch after this list).
Always use low/no-code tools to build POCs for quicker iteration at a lower cost.
For a better user experience and lower cost, you can use caching to reduce response time (lower latency) and avoid paying for repeated identical calls.
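Here’s a rough sketch of that kind of POC: a Streamlit frontend over a single LangChain call, with LangChain’s in-memory cache turned on so repeated identical questions come back instantly. This is my own illustration (not Daniel’s code), using 2023-era imports; newer LangChain versions expose the cache via langchain.globals.set_llm_cache:

```python
# app.py -- a rough POC sketch: Streamlit frontend over a LangChain LLM call,
# with LangChain's in-memory cache so repeated identical prompts are served
# from memory instead of a new API call. Run with: streamlit run app.py
import langchain
import streamlit as st
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

langchain.llm_cache = InMemoryCache()  # cache LLM responses in memory

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

st.title("LangChain POC")
question = st.text_input("Ask a question")

if question:
    # Identical questions hit the cache instead of calling the API again.
    st.write(llm.predict(question))
```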
👉🏻 If you want to get Daniel’s PPT slides, grab it HERE.
Speaker 4: Martin Andrews
Martin has a PhD in Machine Learning, and has been an Open Source advocate since 1999. After a career in finance (based in London and New York), he decided to follow his original passion, and now works on Machine Learning / Artificial Intelligence full-time in Singapore, as the Head of AI at Red Dragon AI. Martin is a co-organiser of the Machine Learning Singapore MeetUp group and is a regular speaker at events in the region.
In his sharing, he talked about how LLM models can self-improve using various novel approaches.
🧠 Here are my takeaways from Martin:
In order to build a more accurate LLM model, having better data quality is more important than building a larger (or more complex) model.
👉🏻 If you want to get Martin’s PPT slides, grab it HERE.
Speaker 5: Sam Witteveen
Sam has extensive experience in startups and mobile applications and is helping developers and companies create smarter applications with machine learning. He has used LangChain since close to its inception, and his YouTube channel includes 20+ videos that the official LangChain site lists in its LangChain tutorials section. Along with being a co-organizer of ML Singapore, he is also one of the mentors for the Google for Startups program in Asia and North America.
In his sharing, he talked about how to build advanced LLM agents. Most importantly, he shared the importance of using an approach called Recursive Criticism and Improvement (RCI) to evaluate a model’s intermediate output to make sure the result is accurate.
🧠 Here are my takeaways from Sam:
Prompt engineering plays a huge role in getting accurate output. Always try to break a big task into smaller tasks. Make each task as simple as possible so that your LLM can execute it with better accuracy.
To improve your LLM’s reasoning capability, it’s important to use RCI so the model constantly critiques its own intermediate outputs and you get better results (a rough sketch of this loop is below).
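To show the idea, here’s a rough sketch of an RCI-style loop: the model drafts an answer, critiques it, then rewrites it using that critique. This is my own illustration of the technique (not Sam’s code), and the model name, prompts, and number of rounds are all assumptions:

```python
# A rough sketch of a Recursive Criticism and Improvement (RCI) loop:
# draft -> critique -> improve, repeated for a few rounds.
# Legacy-style LangChain imports (~2023). Requires OPENAI_API_KEY to be set.
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

def rci_answer(task: str, rounds: int = 2) -> str:
    # Initial draft of the answer.
    answer = llm.predict(f"Solve this step by step:\n{task}")
    for _ in range(rounds):
        # 1. Criticise the intermediate output.
        critique = llm.predict(
            f"Task: {task}\nAnswer: {answer}\n"
            "List any mistakes or gaps in this answer."
        )
        # 2. Improve the answer using the critique.
        answer = llm.predict(
            f"Task: {task}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer, fixing the issues above."
        )
    return answer

print(rci_answer("What is 17 * 24? Show your working."))
```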
Conclusion
Given the rise of LangChain and many other frameworks (e.g. LlamaIndex), it’s becoming easier and easier to build AI applications that solve meaningful problems.
If you’re thinking of using LangChain to build your own LLM application — be it for your work or personal projects — I’d highly recommend learning the fundamentals from this course co-created by Andrew Ng and Harrison Chase (the creator of LangChain).
While the development of LangChain is moving at breakneck speed, the good news is that you don’t need to force yourself to keep up with every latest development.
Just start building and you’ll figure out the rest. Keep building, keep hustling! 🚀
🧠 Today’s newsletter is slightly different compared to the previous ones as I focused on only one topic (instead of a 5-point format). Do you prefer this format or the 5-point format?
Reply to this email and let me know. I read and reply to every email. 🤝🏻
🚀 Whenever you’re ready, there are 4 ways I can help you:
Book a coaching call with me if you need help in the following:
• How To Get Into Data Science
• LinkedIn Growth, Content Strategy & Personal Branding
• 1:1 Mentorship & Career Guidance
• Resume Review
Promote your brand to 1,400+ subscribers in the data/tech space by sponsoring this newsletter.
Watch my YouTube videos where I talk about data science tips, programming, and my tech life (P.S. Don’t forget to like and subscribe 💜).
Follow me on LinkedIn and Twitter for more data science career insights, my mistakes and lessons learned from building a startup.
That's all for today
Thanks for reading. I hope you enjoyed today's issue. More than that, I hope it has helped you in some way and brought you some peace of mind.
You can always write to me by simply replying to this newsletter and we can chat.
See you again next week.
- Admond