Jupyter AI brings generative AI capabilities right into the JupyterLab interface. Having a local AI assistant ensures privacy, reduces latency, and provides offline functionality, making it a powerful tool for developers. In this article, we'll learn how to set up a local AI coding assistant in JupyterLab using Jupyter AI, Ollama, and Hugging Face. By the end of this article, you'll have a fully functional coding assistant in JupyterLab capable of autocompleting code, fixing errors, creating new notebooks from scratch, and much more, as shown in the screenshot below.

⚠️ Jupyter AI is still under heavy development, so some features may break. As of writing this article, I've tested the setup to confirm it works, but expect potential changes as the project evolves. Also, the performance of the assistant depends on the model you select, so make sure you choose one that is fit for your use case.
First things first: what is Jupyter AI? As the name suggests, Jupyter AI is a JupyterLab extension for generative AI. This powerful tool transforms your standard Jupyter Notebook or JupyterLab environment into a generative AI playground. The best part? It also works seamlessly in environments like Google Colaboratory and Visual Studio Code. This extension does all the heavy lifting, providing access to a variety of model providers (both open and closed source) right within your Jupyter environment.

Setting up the environment involves three main components:
- JupyterLab
- The Jupyter AI extension
- Ollama (for Local Model Serving)
- [Optional] Hugging Face (for GGUF models)
Honestly, getting the assistant to resolve coding errors is the easy part. What is tricky is ensuring all the installations have been done correctly. It is therefore essential that you follow the steps correctly.
1. Installing the Jupyter AI Extension
It's recommended to create a new environment specifically for Jupyter AI to keep your existing environment clean and organised. If you use conda, that can look like the following (the environment name is an illustrative choice):
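```bash
# Create and activate a dedicated environment for Jupyter AI
# (the environment name and Python version are illustrative choices)
conda create -n jupyter-ai python=3.11 -y
conda activate jupyter-ai
```

Once done, follow the next steps. Jupyter AI requires JupyterLab 4.x or Jupyter Notebook 7+, so make sure you have the latest version of JupyterLab installed. You can install/upgrade JupyterLab with pip or conda: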
```bash
# Install JupyterLab 4 using pip
pip install jupyterlab~=4.0
```

Next, install the Jupyter AI extension as follows.
pip instal "jupyter-ai[all]"This is nan easiest method for installation arsenic it includes each supplier limitations (so it supports Hugging Face, Ollama, etc., retired of nan box). To date, Jupyter AI supports nan pursuing model providers :

If you encounter errors during the Jupyter AI installation, manually install Jupyter AI using pip without the [all] optional dependency group. This way you can control which models are available in your Jupyter AI environment. For example, to install Jupyter AI with only added support for Ollama models, use the following:
```bash
pip install jupyter-ai langchain-ollama
```

The dependencies depend upon the model providers (see table above). Next, restart your JupyterLab instance. If you see a chat icon on the left sidebar, this means everything has been installed properly. With Jupyter AI, you can chat with models or use inline magic commands directly within your notebooks.

2. Setting Up Ollama for Local Models
Now that Jupyter AI is installed, we need to configure it with a model. While Jupyter AI integrates with Hugging Face models directly, some models may not work properly. Instead, Ollama provides a more reliable way to load models locally.
Ollama is a useful tool for running Large Language Models locally. It lets you download pre-configured AI models from its library. Ollama supports all major platforms (macOS, Windows, Linux), so choose the method for your OS and download and install it from the official website. After installation, verify that it is set up correctly by running:
```bash
$ ollama --version
ollama version is 0.6.2
```

Also, ensure that your Ollama server is running, which you can check by calling ollama serve in the terminal:
```bash
$ ollama serve
Error: listen tcp 127.0.0.1:11434: bind: address already in use
```

If the server is already active, you will see an error like the one above, confirming that Ollama is running and in use.
Option 1: Using Pre-Configured Models
Ollama provides a library of pre-trained models that you can download and run locally. To start using a model, download it using the pull command. For example, to use qwen2.5-coder:1.5b, run:
```bash
ollama pull qwen2.5-coder:1.5b
```

This will download the model to your local environment. To confirm that the model has been downloaded, run:
```bash
ollama list
```

This will list all the models you've downloaded and stored locally on your system using Ollama.
Option 2: Loading a Custom Model
If the model you need isn't available in Ollama's library, you can load a custom model by creating a Model File that specifies the model's source, as sketched below. For detailed instructions on this process, refer to the Ollama Import Documentation.
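As a minimal sketch, a Modelfile can point Ollama at a local GGUF weights file (the path and model name below are hypothetical):

```bash
# Modelfile
# (./models/my-model.gguf is a hypothetical path to your weights)
FROM ./models/my-model.gguf
```

You then register the model under a name of your choice and run it:

```bash
ollama create my-model -f Modelfile
ollama run my-model
```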
Option 3: Running GGUF Models Directly from Hugging Face
Ollama now supports GGUF models directly from the Hugging Face Hub, including both public and private models. This means that if you want to use a GGUF model directly from the Hugging Face Hub, you can do so without requiring a custom Model File as described in Option 2 above.
For example, to load a 4-bit quantized Qwen2.5-Coder-1.5B-Instruct model from Hugging Face:
1. First, enable Ollama under your Local Apps settings.

2. On the model page, choose Ollama from the Use this model dropdown as shown below. It gives you a ready-made command to paste into your terminal; a sketch of it follows.

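The dropdown produces an ollama run command that pulls the GGUF weights straight from the Hub. It looks something like the following (the repository name and Q4_K_M quantization tag are illustrative; copy the exact command Hugging Face shows you):

```bash
# Pull and run a GGUF model directly from the Hugging Face Hub
# (repository and quantization tag are illustrative)
ollama run hf.co/bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF:Q4_K_M
```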
We are almost there. In JupyterLab, open the Jupyter AI chat interface in the sidebar. At the top of the chat panel, or in its settings (gear icon), there is a dropdown or field to select the model provider and model ID. Choose Ollama as the provider, and enter the model name exactly as shown by ollama list in the terminal (e.g. qwen2.5-coder:1.5b). Jupyter AI will connect to the local Ollama server and load that model for queries. No API keys are needed since this is local.
- Set the Language model, Embedding model and Inline completions model based on the models of your choice.
- Save the settings and return to the chat interface.

This configuration links Jupyter AI to the locally running model via Ollama. While inline completions should be enabled by this process, if that doesn't happen, you can do it manually by clicking on the Jupyternaut icon, which is located in the bottom bar of the JupyterLab interface to the left of the Mode indicator (e.g., Mode: Command). This opens a dropdown menu where you can select Enable completions by Jupyternaut to activate the feature.

Once set up, you can use the AI coding assistant for various tasks like code autocompletion, debugging help, and generating new code from scratch. It's important to note here that you can interact with the assistant either through the chat sidebar or directly in notebook cells using %%ai magic commands. Let's look at both ways.
Coding assistant via the chat interface
This is pretty straightforward. You can simply chat with the model to perform an action. For instance, here is how we can ask the model to explain the error in the code, and then subsequently fix it, by selecting code in the notebook.

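To make this concrete, here is a hypothetical buggy cell of the kind you might select before asking the assistant to explain the error (this example is mine, not the one from the screenshot):

```python
# A cell with a common bug: division by zero when the list is empty
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)  # ZeroDivisionError if values is empty

print(mean([]))
```

Selecting the cell and asking the assistant to explain the error should surface the empty-list case and suggest a guard clause.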
You can also ask the AI to generate code for a task from scratch, just by describing what you need in natural language. Here is a Python function that returns all prime numbers up to a given positive integer N, generated by Jupyternaut.

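For reference, a sieve-based function along these lines (my own sketch, not Jupyternaut's verbatim output) does the job:

```python
def primes_up_to(n: int) -> list[int]:
    """Return all prime numbers up to and including a positive integer n."""
    if n < 2:
        return []
    # Sieve of Eratosthenes: mark multiples of each prime as composite
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for multiple in range(i * i, n + 1, i):
                is_prime[multiple] = False
    return [i for i in range(2, n + 1) if is_prime[i]]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```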
Coding assistant via notebook cell or IPython shell:
You can also interact with models directly within a Jupyter notebook. First, load the IPython extension:
```python
%load_ext jupyter_ai_magics
```

Now, you can use the %%ai cell magic to interact with your chosen language model using a specified prompt. Let's replicate the above example, but this time within the notebook cells.
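For example, a cell along these lines sends the same prime-number request to the local Ollama model (the model ID must match what ollama list shows; the prompt wording is my own):

```python
%%ai ollama:qwen2.5-coder:1.5b
Write a Python function that returns all prime numbers up to a given positive integer N.
```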

For more details and options, you can refer to the official documentation.
As you can gather from this article, Jupyter AI makes it easy to set up a coding assistant, provided you have the right installations and setup in place. I used a relatively small model, but you can choose from a variety of models supported by Ollama or Hugging Face. The key advantage of using a local model is that it offers significant benefits: it enhances privacy, reduces latency, and decreases dependence on proprietary model providers. However, running large models locally with Ollama can be resource-intensive, so ensure that you have sufficient RAM. With the rapid pace at which open-source models are improving, you can achieve comparable performance even with these alternatives.