A CLI tool & API over the top 1221 Python libraries.
Used for library Q&A and code generation with all available OpenAI models.
Website | Data Visualizer | PyPI | @fleet_ai
Install the package and run `context` to ask questions about the most up-to-date Python libraries. You will have to provide your OpenAI key to start a session.

```shell
pip install fleet-context
context
```
If you'd like to run the CLI tool locally, you can clone this repository, cd into it, then run:

```shell
pip install -e .
context
```
If you have an existing package that already uses the keyword `context`, you can also activate Fleet Context by running:

```shell
fleet-context
```
You can download any library's embeddings and load them up into a dataframe by running:

```python
from context import download_embeddings

df = download_embeddings("langchain")
```
```
100%|██████████████████████████| 901k/901k [00:00<00:00, 2.64MiB/s]
                                     id                                   dense_embeddings                                           metadata                                      sparse_values
0  91cd9f22-b3b6-49e1-8672-e1e42a1cf766  [-0.014795871, -0.013938751, 0.02374646, -0.02...  {'id': '91cd9f22-b3b6-49e1-8672-e1e42a1cf766',...  {'indices': [4279915734, 3106554626, 771291085...
1  80cd620e-7408-4649-aaa7-3fe3c719b4ed  [-0.0027519625, 0.013772411, 0.0019546314, -0....  {'id': '80cd620e-7408-4649-aaa7-3fe3c719b4ed',...  {'indices': [1497795724, 573857107, 2203090375...
2  87a406ad-e413-42fc-8813-6fa042f80f6a  [-0.022883521, -0.0036436971, 0.0026068306, 0....  {'id': '87a406ad-e413-42fc-8813-6fa042f80f6a',...  {'indices': [1558403699, 640376310, 358389376,...
3  8bdd8dae-8384-414d-87d2-4390ca29d857  [-0.024882555, -0.0041470923, -0.011419726, -0...  {'id': '8bdd8dae-8384-414d-87d2-4390ca29d857',...  {'indices': [1558403699, 3778951566, 274301652...
4  8cc5eb61-317a-4196-8099-51c47ef70406  [-0.036361936, 0.0027855083, -0.013214805, -0....  {'id': '8cc5eb61-317a-4196-8099-51c47ef70406',...  {'indices': [3586802366, 1110127215, 161253108...
```
You can see a full list of supported libraries, and search through them, at the bottom of the page on our website.
If you'd like to directly query from our hosted vector database, you can run:

```python
from context import query

results = query("How do I set up Langchain?")
for result in results:
    print(f"{result['metadata']['title']}\n{result['metadata']['text']}\n")
```
```python
[
    {
        'id': '859e8dff-f9ec-497d-aa07-344e48b2f67b',
        'score': 0.848275101,
        'values': [],
        'metadata': {
            'library_id': '4506492b-70de-49f1-ba2e-d65bd7048a28',
            'page_id': '732e264c-c077-4978-bc93-380d7dc28983',
            'parent': '3be9bbcc-b5d6-4a91-9f72-a570c2db33e5',
            'section_id': '',
            'section_index': 0.0,
            'text': "Quickstart ## Installation\u200b To install LangChain run: - Pip - Conda pip install langchain conda install langchain -c conda-forge For more details, see our Installation guide. ## Environment setup\u200b Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs. First we'll need to install their Python package: pip install openai Accessing the API requires an API key, which you can get by creating an account and heading here.",
            'title': 'Quickstart | 🦜️🔗 Langchain',
            'type': '',
            'url': 'https://python.langchain.com/docs/get_started/quickstart'
        }
    },
    # ...and 9 more
]
```
You can also set a custom k value and filter by any metadata field we support (listed below), plus `library_name`:

```python
results = query("How do I set up Langchain?", k=15, filters={"library_name": "langchain"})
```
One of the biggest advantages of using Fleet Context's embeddings is the amount of information preserved throughout the chunking and embedding process. You can take advantage of the metadata to improve the quality of your retrievals significantly.
Here's a full list of metadata that we support.
IDs:
- `library_id`: the UUID of the library referenced
- `page_id`: the UUID of the page the chunk was retrieved from
- `parent`: the UUID of the section the chunk was retrieved from (not to be confused with `section_id`)
Page/section information:
- `url`: the URL of the section or page the chunk was retrieved from, formatted as `f"{page_url}#{section_id}"`
- `section_id`: the section's `id` field from the HTML
- `section_index`: the ordering of the chunk within the section. If there are 2 chunks that have the same parent, this will tell you which one was presented first.
Chunk information:
- `title`: the title of the section, or of the page if a section title does not exist
- `text`: the text, formatted in Markdown. Note that Markdown is removed from the embeddings for better retrieval results.
- `type`: the type of the chunk. Can be `None` (most common) or a defined value like `class`, `function`, `attribute`, `data`, `exception`, and more.
Re-ranking is commonly known to improve results pretty dramatically. We can take that a step further by taking advantage of the fact that the ordering within each section/page is preserved: presenting chunks to the model in the same order they appear to the reader will likely produce the best results.

Use `section_index` to do a smart re-ranking of your chunks. If you notice 2 or more chunks with the same `parent` field that are relatively close in position on the page via `section_index`, you can go up one level, query all chunks with the same `parent` UUID, and pass in the entire document.
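As an illustration, here's a minimal sketch of that re-ranking step. It assumes the result dicts returned by `query` above; the helper name and the score-based group ordering are our own choices, not part of the library:

```python
from collections import defaultdict

from context import query

def rerank_by_section(results):
    """Group chunks by their parent section, then restore reading
    order (section_index) within each group."""
    groups = defaultdict(list)
    for result in results:
        groups[result["metadata"]["parent"]].append(result)

    reranked = []
    # Visit groups starting with the best-scoring one...
    for chunks in sorted(groups.values(), key=lambda g: -max(c["score"] for c in g)):
        # ...and emit each group's chunks in the order a reader would see them.
        reranked.extend(sorted(chunks, key=lambda c: c["metadata"]["section_index"]))
    return reranked

results = rerank_by_section(query("How do I set up Langchain?"))
```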
On retrieval, you can map intent and filter via `type`. If the user intends to generate code, you can pre-filter your retrieval to just `class` or `function` chunks. You can use this in creative ways; we've found that pairing it with OpenAI's function calling works really well.
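For example, assuming `filters` accepts the `type` field the same way it accepts `library_name` above, and that multiple fields can be combined, a code-generation intent might pre-filter like this:

```python
from context import query

# Retrieve only API reference chunks when the user wants code generated.
results = query(
    "How do I write a custom Langchain retriever?",
    k=15,
    filters={"type": "function", "library_name": "langchain"},
)
```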
Also, `type` allows you to construct your prompt with more clarity and to display richer information to the user. For example, prefixing each chunk with its type in the prompt produces better results, because it helps the language model understand what the chunk is trying to say.

Note that `type` is not guaranteed to be present and defined for all libraries; it is only available for the ones that have had their documentation generated by Sphinx/readthedocs.
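As a rough sketch of that kind of prompt construction (the bracketed template and the fallback label are our own, not a prescribed format):

```python
from context import query

results = query("How do I write a custom Langchain retriever?")

def format_chunk(result):
    """Prefix each chunk with its type and title so the model can tell
    a class reference from a function reference from plain prose."""
    meta = result["metadata"]
    chunk_type = meta["type"] or "documentation"  # type may be empty or None
    return f"[{chunk_type}] {meta['title']}\n{meta['text']}"

prompt_context = "\n\n".join(format_chunk(r) for r in results)
```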
Our `text` field preserves all information from the HTML elements by converting them to Markdown. This allows for two big advantages:
- From our tests, we've discovered that language models perform better with markdown formatting than without
- You're able to display rich information (titles, urls, images) to the user if you're sourcing a chunk
You can link the user to the exact section with `url` (where supported, it comes pre-loaded with the section anchor within the page).
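A minimal sketch of sourcing a chunk back to the user, using the `title`, `url`, and `text` fields shown in the metadata above:

```python
from context import query

for result in query("How do I set up Langchain?", k=3):
    meta = result["metadata"]
    # url already deep-links to the section: f"{page_url}#{section_id}"
    print(f"### {meta['title']}\n{meta['url']}\n\n{meta['text']}\n")
```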
You can use the `-l` or `--libraries` flag followed by a list of libraries to limit your session to those libraries. Defaults to all. View a list of all supported libraries on our website.

```shell
context -l langchain pydantic openai
```
You can select a different OpenAI model by using `-m` or `--model`. Defaults to `gpt-4`. You can set your model to `gpt-4-1106-preview` (GPT-4 Turbo), `gpt-3.5-turbo`, or `gpt-3.5-turbo-16k`.

```shell
context -m gpt-4-1106-preview
```
You can use Claude, CodeLlama, Mistral, and many other models by:
- creating an API key on OpenRouter (visit the Keys page after signing up)
- setting `OPENROUTER_API_KEY` as an environment variable
- specifying your model using the company prefix, e.g.:

```shell
context -m phind/phind-codellama-34b
```
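For reference, in a POSIX shell the key is set like any other environment variable (the value below is a placeholder):

```shell
export OPENROUTER_API_KEY="your-openrouter-key"
```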
OpenAI models work this way as well; just use e.g. `openai/gpt-4-32k`. Other model options are available here.
Optionally, you can attribute your inference token usage to your app or website by setting `OPENROUTER_APP_URL` and `OPENROUTER_APP_TITLE`. Your app will show on the homepage of https://openrouter.ai if ranked.
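For example, with placeholder values:

```shell
export OPENROUTER_APP_URL="https://example.com"
export OPENROUTER_APP_TITLE="My Docs Assistant"
```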
Local model support is powered by LM Studio. To use local models, you can use `--local` or `-n`:

```shell
context --local
```
You need to download your local model through LM Studio. To do that:
- Download LM Studio. You can find the download link here: https://lmstudio.ai
- Open LM Studio and download your model of choice.
- Click the ↔ icon on the very left sidebar
- Select your model and click "Start Server"
The context window defaults to 3000. You can change this by using `--context_window` or `-w`:

```shell
context --local --context_window 4096
```
You can control the number of retrieved chunks with `-k` or `--k_value` (defaults to 15), and toggle whether the model cites its sources with `-c` or `--cite_sources` (defaults to true).

```shell
context -k 25 -c false
```
We saw a 37-point improvement in generation scores for `gpt-4` and a 34-point improvement for `gpt-4-turbo` across a randomly sampled set of 50 libraries.

We attribute the `gpt-4` gain to its lack of knowledge of the most up-to-date library versions, and the `gpt-4-turbo` gain to a combination of having relevant, up-to-date information to generate with and the overall relevance of the retrieved information.
Check out our visualized data here.
You can download all embeddings here.