I tried Ollama’s local model. It’s cool to run an LLM in my terminal!

https://github.com/ollama/ollama

It took very long to load everything - can’t help thinking about how much storage it’s taking up.

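For context, the slow first load is the one-time model download, and the CLI can report how much disk each model actually takes. A rough sketch of the commands involved (assuming the `llama3` model name; other models work the same way):

```shell
# Pull a model once; this is the slow, multi-gigabyte download step.
ollama pull llama3

# List downloaded models with their on-disk sizes,
# to see exactly how much storage they are taking.
ollama list

# Start an interactive chat session in the terminal.
ollama run llama3
```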
image.png

Finally loaded! I then asked Ollama a brief question to test how it differs from ChatGPT, as I’d never used it before. Surprisingly, it took almost no time to load or think before responding to the prompt. After reading the “reflection” it generated, the content really echoes Allison’s idea of “ransom notes” - it’s three paragraphs of text corpses that excerpt, analyze, and collage my questions, but not any thoughtful, pulsating expression of ideas.

image.png

Given the limitations of the content, my other guess was that it couldn’t access the link and read the content for itself.

image.png

Very true. In its text analysis of the report there are many aspects that didn’t appear in the report at all, so it’s again text corpses pieced together from its previous answers and the title of the article.

I also asked Ollama if it could generate images, and I got this:

image.png

Very cute! Then I asked:

image.png

Not that good. I asked Ollama to draw a lizard again, but

image.png

It can’t draw a lizard :(

image.png

Not even an Ollama.

Overall, compared to ChatGPT/Claude, I think Ollama can only run basic text analysis and lacks many key features, such as importing files, text-to-image generation, and access to external websites. But the ASCII art is fun - it feels like an “okay, let’s at least leave some small cute stuff here” from the developers.
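One thing the terminal chat hides: Ollama also serves a local REST API on port 11434, so the same text-only generation can be scripted. A minimal sketch, assuming the `llama3` model has already been pulled (the model name is my assumption):

```shell
# Send a one-off prompt to the local Ollama server.
# "stream": false returns the whole response as a single JSON object
# instead of streaming it token by token.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "In one sentence, what is a ransom-note collage?",
  "stream": false
}'
```

Everything stays on the machine - which is the appeal, but also exactly why it can’t fetch links or browse the web.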