Using LLM Prompts for Source Attribution
Published under the AI category.
My LLM chatbot is instructed to only reference sources that are provided in the prompt I send to OpenAI. This is essential because I want answers to reflect what I have written rather than the more general facts available to GPT-3.5 Turbo. One of my primary considerations when developing my LLM chatbot was to ensure that, where possible, the sources of claims were cited. I devised two methods that, combined, give both me and users information about the sources associated with a question:
- Ask the LLM to cite the sources used to make a claim, and;
- Provide a list of the top three most relevant pieces of source material that were provided in the prompt to the user.
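Concretely, this means each answer returned to the user pairs the model's cited response with the top sources that went into the prompt. Here is a rough sketch of that shape, with illustrative names rather than my actual code:

```
from dataclasses import dataclass


@dataclass
class Source:
    title: str
    url: str
    published: str  # ISO date, e.g. "2020-01-01"
    text: str


@dataclass
class ChatbotAnswer:
    answer: str        # the LLM response, with inline citations
    top_sources: list  # the three most relevant Source records shown to the user
```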
Implementation
Asking the LLM itself to cite sources was a prudent approach for providing inline substantiation of claims. If the bot says "James brews coffee with a Chemex", for example, it can link to the source where an assertion to this effect is made, inline rather than in a separate part of the page. Indeed, the more relevant context that can fit in the answer, the better. I decided that I wanted sources to be cited like this:
This is an answer. (This is a source)
All of the sources that I provide to the bot appear in an expanded format, like this:
Source Text (Source: <a href="Source URL">Source Title</a>, 2020-01-01)
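A small helper along these lines can produce that expanded format for each source before it is added to the prompt; this is an illustrative sketch rather than my exact code:

```
def format_source(text: str, url: str, title: str, published: str) -> str:
    """Render a source in the expanded format the bot is shown."""
    return f'{text} (Source: <a href="{url}">{title}</a>, {published})'


# format_source("James is a writer.", "https://jamesg.blog", "James' homepage", "2020-01-01")
# -> 'James is a writer. (Source: <a href="https://jamesg.blog">James\' homepage</a>, 2020-01-01)'
```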
To ensure the bot cited sources in the aforementioned format, I spent a lot of time trying different instructions in my initial prompt. I found that providing direct instructions, written as concisely as possible, together with examples showing the structure of the sources and the desired output, improved the chances that the LLM cited sources in the desired way. Examples of this are provided in the System Prompt below.
Alongside the instruction to use the sources provided in my initial prompt, I include instructions on what not to do. For example, I ask the LLM to only provide quotes that come from the source prompt; quotations must be taken directly from the sources. Links should only be added if they are provided in the sources themselves. This means a link can be added if it was mentioned in the underlying source (i.e. I mention a draft memo in one of the sources, so that can be linked) or if it is part of the source attribution metadata paired with each source.
Here is the full System Prompt I provide to OpenAI with specific instructions on how to cite sources:
```
You can use the text provided below to help you answer. If you are not confident with your answer, say 'I don't know' then stop.
You are not allowed to add links from sites that are not mentioned in the Sources.
Citations must replace the keyword in the source text. Do not cite like "(Source: )".
For example, if you want to cite the first source, you would write: "concept..." then continue with your text. Do not put links at the end of your answer.
All links must be in HTML tags.
Include a maximum of three citations. Only cite from URLs in this prompt. Never cite from another site.
If you use a fact from the system prompt, cite it with the following format: "James' homepage". Provide quotes to substantiate your claims, only from this prompt, never from elsewhere. Cite quotes with the following format: "page title".
Sources use this format:
Source Text (Source: Source Title, 2020-01-01)
Here is an example source:
James is a writer. (Source: James' homepage, 2020-01-01) If you were to cite this, you would say:
James is a writer. (James' homepage, 2020-01-01)
[STOP] means end of sources.
Quotations from Sources may be used to substantiate your claims, as long as they are cited.
```
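For context, here is roughly how that System Prompt and the query prompt described below are combined into a request. This is a minimal sketch using the pre-1.0 openai Python client, not my exact implementation:

```
import openai


def ask(system_prompt: str, query_prompt: str) -> str:
    """Send the system prompt and the constructed query prompt to GPT-3.5 Turbo."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": query_prompt},
        ],
        temperature=0,  # keep answers close to the provided sources
    )
    return response["choices"][0]["message"]["content"]
```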
These instructions provide direction for the model, but I also need to format the Assistant prompt in the right way so that they make sense. Here is the high-level template I use to format a query that a user provides:
```
Answer the question '[[[QUERY]]]?'.
If you use text in a section to make a statement, you must cite the source in a HTML tag. The text in the Sources section is formatted with a URL and a passage. You can only cite sources that are in the Sources section. The anchor text must be the title of source. You must never generate the anchor text.
Use the Sources text below, as well as your facts above, to answer. Sources have dates at the end. You should prefer more recent information. If you are citing information older than one year from [[[CURRENT_DATE]]], add a caveat such as "this may be out of date since my Source was published on [date]", where [date] is the date on which the source was published.
[STOP] means end of sources.
Sources
[[[SOURCES]]] [STOP]
```
All text wrapped in triple square brackets is substituted at query run time with the requisite information. Notice that the Sources section is under its own heading. I read that adding section delimiters can be helpful, although I did not run an A/B test to validate this hypothesis.
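As a sketch of that substitution step (the placeholder names match the template above; the helper itself is illustrative):

```
import datetime

# Abridged version of the query template shown above.
QUERY_TEMPLATE = """Answer the question '[[[QUERY]]]?'.

Sources

[[[SOURCES]]] [STOP]"""


def build_query_prompt(query: str, formatted_sources: list) -> str:
    """Substitute the [[[...]]] placeholders at query run time."""
    return (
        QUERY_TEMPLATE.replace("[[[QUERY]]]", query)
        .replace("[[[CURRENT_DATE]]]", datetime.date.today().isoformat())
        .replace("[[[SOURCES]]]", "\n\n".join(formatted_sources))
    )
```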
An example source is:
My name is James. (<a href="https://jamesg.blog">James' homepage</a>, 2020-01-01)
All of these sources -- up to 10, depending on what will fit in the prompt -- are added to the Sources section and sent to the model.
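"What will fit" is a token-budget question. Here is a sketch of how that trimming might work with tiktoken; the budget figure is an assumption rather than my actual limit:

```
import tiktoken


def select_sources(formatted_sources: list, budget: int = 2500) -> list:
    """Add sources, most relevant first, until the token budget is used up."""
    encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
    selected, used = [], 0
    for source in formatted_sources[:10]:  # hard cap of ten sources
        tokens = len(encoding.encode(source))
        if used + tokens > budget:
            break
        selected.append(source)
        used += tokens
    return selected
```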
Evaluation
In my experimentation, the aforementioned prompts have performed well in terms of returning information in the right formats. However, sometimes the LLM makes up a claim and cites a source even if the source does not substantiate the claim. In other cases, the LLM makes a correct statement but cites the wrong piece of reference material as the source.
Given the question "what parks have you visited in sf?", my chatbot returned:
I have visited Presidio, Buena Vista, Golden Gate, Fort Mason, and Crissy Field. These are the parks I've explored so far in San Francisco. However, given that my source was published on February 27, 2023, this information may not be up to date. (For how long are you staying in San Francisco? 10 minutes.)
The linked resource does not fully substantiate this claim. However, because I have visibility into the top three most relevant sources that were fed into the Assistant prompt, I can look and see that there is a source in there that does substantiate it. This attests to the value of keeping track of the sources that were sent in with the original prompt: I can dig deeper into why the bot returned a particular response and fact-check it against the source material.
Is this solution perfect? No. I have been experimenting with asking GPT to verify whether the response to the original prompt is substantiated by a source. I have been working on this as part of an evaluation framework for my bot, but it is not yet fully integrated with the rest of the system. Directionally, I want to ask the LLM whether the output from the original prompt is substantiated by the sources. Thus, the back-end flow will look something like this (sketches of the retrieval and verification steps follow the list):
- User provides query;
- Prompt is constructed with reference to relevant sources, as determined by semantic similarity between embeddings for each source (sketched below);
- Response is received;
- New prompt is sent with the prompt response as well as the sources, tasked with checking if any of the sources directly substantiate the prompt response, then;
- Reject the response and say "I don't know" (or something to that effect) to the user if the model is unsure or finds that the sources don't provide evidence to support the claim made in the original prompt response.
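The retrieval step in the second bullet might look something like this. I use text-embedding-ada-002 here for illustration, and in practice the source embeddings would be precomputed rather than fetched on every query:

```
import numpy as np
import openai


def top_sources(query: str, sources: list, k: int = 3) -> list:
    """Rank sources by cosine similarity between the query and source embeddings."""
    result = openai.Embedding.create(
        model="text-embedding-ada-002", input=[query] + sources
    )
    vectors = [np.array(item["embedding"]) for item in result["data"]]
    query_vec, source_vecs = vectors[0], vectors[1:]
    scores = [
        float(np.dot(query_vec, vec) / (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
        for vec in source_vecs
    ]
    ranked = sorted(zip(scores, sources), key=lambda pair: pair[0], reverse=True)
    return [source for _, source in ranked[:k]]
```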
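And here is a minimal sketch of the verification step itself; the prompt wording and the YES/NO convention are assumptions, since this piece is not wired into the bot yet:

```
import openai

VERIFY_PROMPT = (
    "Here is an answer:\n\n{answer}\n\n"
    "Here are the sources that were available:\n\n{sources}\n\n"
    "Do the sources directly substantiate the answer? Reply YES or NO."
)


def answer_is_substantiated(answer: str, formatted_sources: list) -> bool:
    """Ask the model whether the original answer is backed by the provided sources."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": VERIFY_PROMPT.format(
                answer=answer, sources="\n\n".join(formatted_sources)
            ),
        }],
        temperature=0,
    )
    verdict = response["choices"][0]["message"]["content"].strip().upper()
    return verdict.startswith("YES")
```

If this check fails, the bot would return "I don't know" instead of the original answer.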
I'm excited to explore this area further. If you have any tips on how to prevent hallucination and build a UX with accurate, intuitive citations, let me know!
Responses
Respond to this post by sending a Webmention.
Have a comment? Email me at readers@jamesg.blog.