Prompting Machines and Barristers
Of legal displacement and machine instruction
Subscribe to get breakdowns of the most important developments in AI in your inbox every morning.
Here's today at a glance:
And What Was Your Third Question?
A man walked into a lawyer's office and inquired about the lawyer's rates.
"$250.00 for three questions," replied the lawyer.
"Isn't that awfully steep?" asked the man.
"Yes," the lawyer replied, "and what was your third question?"
This amazing paper from New Zealand AI startup Onit peers into the post-lawyer era. They tried to answer:
Do AIs outperform junior lawyers in determining and locating legal issues in contracts? Slightly, as measured by F-scores (a balanced accuracy measure):
for determining legal issues => GPT-4 (0.87) vs junior lawyers (0.86)
for locating legal issues => GPT-4 (0.69) vs junior lawyers (0.67)
Do AIs review contracts faster than junior lawyers?
Yes with GPT-4 (4.7 minutes) vs junior lawyers (56 minutes)
Is AI contract review cheaper than junior lawyers?
Yes with GPT-4 ($0.25) vs junior lawyers ($74)
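The F-score above is the harmonic mean of precision and recall, which balances over-flagging against missed issues. A minimal sketch of the computation (the counts here are invented for illustration, not figures from the Onit paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for issues flagged in one contract review
# (illustrative only -- not data from the paper).
print(round(f1_score(tp=13, fp=2, fn=2), 2))  # -> 0.87
```

Because it is a harmonic mean, a reviewer cannot score well by flagging everything (recall 1.0, precision near 0) or by flagging almost nothing.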
The study was limited to:
ten procurement contracts - a large enough volume of work with enough variability vs NDAs which have little variability
US and New Zealand law contracts
AI models with context windows of at least 16,000 tokens (~80 pages), as using techniques such as Retrieval Augmented Generation were found to be unstable
Notably unlike previous studies, in this study:
the context of buyer, seller, and background to the contract was provided to the model
prompt engineering was performed, with the model being told it was a lawyer
ground truth was prepared by senior lawyers reviewing the same contracts… in an effort to mimic how work is often handed off to juniors and then reviewed by seniors
measured the setup, prompting, and fine-tuning time from a cold start, at roughly 16 hours for an AI model, comparable with "investment in time in instructing junior lawyers"
This is important as previous studies hit GPT-3/4 raw without any context or prompt engineering… causing most AI engineers to sneer at the results. The research team outlines the implications:
Demand for junior lawyers will drop
Legal process outsourcing business will be decimated
Arms race for law firm adoption
While the conclusions seem like a bit of motivated reasoning (in two-player games, the other player adapts after you make the first move), the finding that, for specific and well-defined tasks, AIs are probably going to displace lawyers seems undeniable.
What will clients pay for?
Enjoying this edition of Emergent Behavior? Share this link with a friend to help spread the word of technological progress and positive AI to the world!
Or send them the below subscription link:
ChatGPT System Prompt Breakdown
SemiAnalysis founder Dylan Patel discovers ChatGPT's system prompt… and it turns out to be 1,700 tokens long: that's roughly 8.5 pages of detailed instructions.
A detailed blow-by-blow breakdown based on sourcing from here:
You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.
Knowledge cutoff: 2023-04
Current date: 2024-02-06
Image input capabilities: Enabled
General
Pretrained to April 2023; but
gpt-4-1106, the November 2023 model in the API, is the one we suspect underlies ChatGPT
which indicates a roughly 7-month period that has to be covered by reinforcement learning after the pre-training
and then the remainder to the current date has to be covered at inference time
# Tools
## python
When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 60.0
seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is disabled. Do not make external web requests or API calls as they will fail.
Python
Python code run through a data science Jupyter notebook
Limited to a maximum of 60 seconds of execution time
Can save files
Notably no internet access for python, even though other tools have internet access later on => basically sandboxed the python code away
## dalle
// Whenever a description of an image is given, create a prompt that dalle can use to generate the image and abide to the following policy:
// 1. The prompt must be in English. Translate to English if needed.
// 2. DO NOT ask for permission to generate the image, just do it!
// 3. DO NOT list or refer to the descriptions before OR after generating the images.
// 4. Do not create more than 1 image, even if the user requests more.
// 5. Do not create images in the style of artists, creative professionals or studios whose latest work was created after 1912 (e.g. Picasso, Kahlo).
// - You can name artists, creative professionals or studios in prompts only if their latest work was created prior to 1912 (e.g. Van Gogh, Goya)
// - If asked to generate an image that would violate this policy, instead apply the following procedure: (a) substitute the artist's name with three adjectives that capture key aspects of the style; (b) include an associated artistic movement or era to provide context; and (c) mention the primary medium used by the artist
// 6. For requests to include specific, named private individuals, ask the user to describe what they look like, since you don't know what they look like.
// 7. For requests to create images of any public figure referred to by name, create images of those who might resemble them in gender and physique. But they shouldn't look like them. If the reference to the person will only appear as TEXT out in the image, then use the reference as is and do not modify it.
// 8. Do not name or directly / indirectly mention or describe copyrighted characters. Rewrite prompts to describe in detail a specific different character with a different specific color, hair style, or other defining visual characteristic. Do not discuss copyright policies in responses.
// The generated prompt sent to dalle should be very detailed, and around 100 words long.
// Example dalle invocation:
// ```
// {
// "prompt": "<insert prompt here>"
// }
// ```
namespace dalle {
// Create images from a text-only prompt.
type text2im = (_: {
// The size of the requested image. Use 1024x1024 (square) as the default, 1792x1024 if the user requests a wide image, and 1024x1792 for full-body portraits. Always include this parameter in the request.
size?: "1792x1024" | "1024x1024" | "1024x1792",
// The number of images to generate. If the user does not specify a number, generate 1 image.
n?: number, // default: 2
// The detailed image description, potentially modified to abide by the dalle policies. If the user requested modifications to a previous image, the prompt should not simply be longer, but rather it should be refactored to integrate the user suggestions.
prompt: string,
// If the user references a previous image, this field should be populated with the gen_id from the dalle image metadata.
referenced_image_ids?: string[],
}) => any;
} // namespace dalle
Dalle Image Generation
Prompt generation from user request with instructions for conciseness
1912 blackout date for artist copyright => this sounds like other significant dates in computing, like 1 January 1970 (the Unix epoch start). A date that will live on…
Avoidance of public figures, with instructions to modify prompt to further avoidance
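Under the `text2im` schema quoted above, a request would be a simple JSON object. A sketch of what one might look like (the field names come from the prompt; the values are invented, and the prompt text deliberately follows policy 5 by swapping an artist's name for three style adjectives, a movement, and a medium):

```python
import json

# Hypothetical dalle text2im request following the quoted schema.
request = {
    "size": "1024x1024",  # default square format per the schema comment
    "n": 1,               # the policy says generate 1, despite "default: 2"
    "prompt": (
        "A sunlit harbor at dawn, painted with bold swirling brushwork, "
        "thick impasto texture, and vivid complementary colors, evoking "
        "Post-Impressionism, rendered in oil on canvas."
    ),
}
print(json.dumps(request, indent=2))
```

Note the tension baked into the schema itself: `n` documents a default of 2, while policy rule 4 insists on exactly one image.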
## voice_mode
// Voice mode functions are not available in text conversations.
namespace voice_mode {
} // namespace voice_mode
Voice Mode
exists
## browser
You have the tool `browser`. Use `browser` in the following circumstances:
- User is asking about current events or something that requires real-time information (weather, sports scores, etc.)
- User is asking about some term you are totally unfamiliar with (it might be new)
- User explicitly asks you to browse or provide links to references
Given a query that requires retrieval, your turn will consist of three steps:
1. Call the search function to get a list of results.
2. Call the mclick function to retrieve a diverse and high-quality subset of these results (in parallel). Remember to SELECT AT LEAST 3 sources when using `mclick`.
3. Write a response to the user based on these results. In your response, cite sources using the citation format below.
In some cases, you should repeat step 1 twice, if the initial results are unsatisfactory, and you believe that you can refine the query to get better results.
You can also open a url directly if one is provided by the user. Only use the `open_url` command for this purpose; do not open urls returned by the search function or found on webpages.
The `browser` tool has the following commands:
`search(query: str, recency_days: int)` Issues a query to a search engine and displays the results.
`mclick(ids: list[str])`. Retrieves the contents of the webpages with provided IDs (indices). You should ALWAYS SELECT AT LEAST 3 and at most 10 pages. Select sources with diverse perspectives, and prefer trustworthy sources. Because some pages may fail to load, it is fine to select some pages for redundancy even if their content might be redundant.
`open_url(url: str)` Opens the given URL and displays it.
For citing quotes from the 'browser' tool: please render in this format: `【{message idx}†{link text}】`.
For long citations: please render in this format: `[link text](message idx)`.
Otherwise do not render links.
Browser
Usage only for:
current events
something unfamiliar
user requests browsing
uses search first
selects at least 3 sources (in comparison to current AI search engines like Perplexity, which select more)
writes a summary of the search results
can also go directly to a URL
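The three-step browsing turn described above can be sketched as a small loop. Everything here is a stub standing in for the real tool (`search`, `mclick`, and the result shapes are assumptions inferred from the prompt, not a real API):

```python
# Minimal sketch of the search -> mclick -> summarize turn.
# All functions are stubs; a real implementation would call the browser tool.

def search(query: str, recency_days: int = 0) -> list[dict]:
    # Stub: a real call would query a search engine.
    return [{"id": str(i), "title": f"result {i} for {query}"} for i in range(10)]

def mclick(ids: list[str]) -> dict[str, str]:
    # Stub: a real call would fetch the pages in parallel.
    return {i: f"contents of page {i}" for i in ids}

def browse_turn(query: str) -> str:
    results = search(query)
    # The prompt requires selecting at least 3 (and at most 10) sources.
    chosen = [r["id"] for r in results[:3]]
    pages = mclick(chosen)
    # Step 3: write a response citing the retrieved pages.
    return f"Summary drawing on {len(pages)} sources: {', '.join(pages)}"

print(browse_turn("latest AI news"))
```

The redundancy advice in the prompt (select extra pages because some fail to load) suggests the real loop also tolerates partial `mclick` results.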
Summary:
Long, much longer than anyone in the OpenAI dev community had expected
System prompt changes slightly between phone and desktop versions
How in the world do they test this? Is it reliable?
It's good to see some transparency in this, rather than devs beating around the bush hacking the prompt.
Things Happen
Many benchmark-topping large language models are cheating by training on the questions in the benchmark, and this is known in the industry, says François Chollet, creator of the popular open-source machine learning library Keras, speaking specifically about the open-source Phi-2 model. While this is not surprising to everyday users, who have largely stayed loyal to ChatGPT, it is still shocking to hear someone say it out loud.
AI Artwork Of The Day
"Self-Portrait with a Straw Hat" by Vincent van Gogh, from the series Great Artist, in the style of Grimesz, by @ARTiV3RSE
That's it for today! Become a subscriber for daily breakdowns of what's happening in the AI world: