
The story so far. ChatGPT, or other large language models, is not ideal for information retrieval alone because it often hallucinates or fabricates information, including references.
Warning: I am not a data or information scientist; this is a new area I am trying to learn about. The rise of large language models, such as GPT-3, GPT-4, ChatGPT, and BERT, has dramatically improved information retrieval in recent years.
Warning: I wrote this to explain to myself the high-level technical details of deep learning, so it is likely to include inaccuracies.
Warning: Speculative piece! I recently gave a keynote at the IATUL 2023 conference where I talked about the possible impact of large language models (LLMs) on academic libraries.
In my last blog post, I tried to identify seminal papers using a variety of methods. These were divided into two main categories.
I was recently asked a question: how do you find seminal papers/work/research?
As academic librarians helping early-stage researchers (Masters and PhD students), we are often asked to provide guidance on the literature review process in one-shot classes.
As I write this, OpenAI has had ChatGPT, their GPT-3.5 large language model (LLM), unleashed on the world for about a month, and the online world is equal parts hype and confusion.
Epistemic status: I have been reading technical papers on large language models on and off since 2020; I mostly get the gist but don't understand the deepest technical details. I have written and published on academic discovery search for most of my librarianship career since 2008. The way search engines work has not changed fundamentally since the 2000s.
I've started seriously studying and blogging about Open Access since 2012, starting off by reading books by Walt Crawford and Peter Suber and since then I've continued to read and muse about the issue covering everything from
I recently came across "Automated citation recommendation tools encourage questionable citations" (which was first brought to my attention by this blog post) an exceedingly thought-provoking article about bias in discovery tools, particularly new ones that can