Date:

LLMs: When to Use Them

Reasonable Use Cases

Speeding up Work Tasks

LLMs are good at tasks such as:

  • coming up with names for things
  • classifying items in a long list
  • formatting data (e.g. converting to CSV/JSON, or different date formats)
  • data extraction from unstructured text (e.g. email addresses or URLs)
  • rephrasing or adjusting the tone of your writing

Code Assistants

Code assistants are good for quick prototyping, providing you understand the code they are generating. You will have better results if you split the task into very small steps and commit often. Perhaps consider using the Mikado method with the LLM.

If AI is generating the code, I believe you should write the unit tests yourself, so that you are forced to check its correctness.

You should not use code assistant tools or paste non-public code into LLMs without your employer’s (or the copyright holder’s) permission.

I think code assistants are a bad idea for learning new frameworks and libraries, for several reasons:

  • it discourages you from reading the docs and forming a good mental model of how the thing works
  • it’s not always smart enough to fix bugs for you or explain why your code isn’t working
  • you can’t recognize when the coding style is outdated or there is simpler way of doing things

I think refactoring is best done by hand unless you want to apply a single refactoring many times across a large codebase. AI generated refactorings are not safe and need checking for correctness.

Retrieval-Augmented Generation (RAG)

A RAG is a multi-step process that first uses word vectors to fetch content from a knowledge base, and then feeds it to an LLM to answer a user’s question. For question answering, I’d expect this to outperform traditional search in cases where information is scattered amongst a lot of similar looking documents, for example slack messages or helpdesk tickets. It only really makes sense if your knowledge base is large enough that it would be costly for a technical writer to trawl and summarise.

I’m a bit skeptical of building such systems in-house though – as opposed to something like RunLLM – it feels like this effort would be better invested in improving your own service or its documentation.

Questionable Use Cases

LLMs are not Oracles

LLMs are not good at:

  • doing research for you
  • communicating factual information
  • weighing up evidence

It is nonsense to ask an LLM for opinions on ideas, because LLMs can support any position depending on their prompt and context.

Building User-Facing Services on Top of LLMs is Risky

  • LLMs are costly to train and run due to the amount of compute required. This has a high energy cost, to the point that big tech companies have walked back their commitments to carbon neutrality in order to expand data centers. I wouldn’t be surprised if companies hike up prices as the technology matures.
  • LLM outputs cannot be trusted to be free of copyrighted or sensitive data without more transparency over how they were trained
  • Allowing LLMs to act as “agents” is open to abuse from prompt injection attacks, and they can be misled by untrustworthy information
  • LLMs will happily lie to customers
  • It might be possible for AI to perform more complex reasoning by chaining many LLM operations, but I think this is unproven and expensive at this point

Conclusion

While LLMs have the potential to speed up certain tasks, it’s essential to be aware of their limitations and not overhype their capabilities. As technologists, we should critically evaluate the use cases for LLMs and consider what they are actually good at and what they are not good at.

FAQs

Q: Are LLMs good for doing research for me?

A: No, LLMs are not good at doing research for you. They can provide information, but it’s not reliable and should not be considered as a replacement for human research.

Q: Can I use LLMs to communicate factual information?

A: No, LLMs are not good at communicating factual information. They can provide information, but it’s not reliable and should not be considered as a replacement for human communication.

Q: Are LLMs a good idea for learning new frameworks and libraries?

A: No, LLMs are not a good idea for learning new frameworks and libraries. They can provide code suggestions, but they are not a replacement for reading the documentation and forming a good mental model of how the thing works.

Q: Can I use LLMs to generate code for me?

A: Yes, LLMs can generate code for you, but you should be aware of the limitations and potential risks involved. You should also write unit tests yourself to ensure the code is correct and not use LLMs for non-public code without permission.

Latest stories

Read More

LEAVE A REPLY

Please enter your comment!
Please enter your name here