Abstract
As the world goes crazy about AI (read: large/medium/small language models), we decided to run some test scenarios. For now, Hacking Archives of India is experimenting with the following:
- Extracting the YouTube transcript (fabric’s yt script)
- Passing the transcript to Ollama for summarization (fabric’s summarize script/prompt)
- Using the llama3:70b model in Ollama for the summarization
- Posting the summary at the bottom of each page
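The steps above can be sketched roughly as the shell pipeline below. This is a hypothetical illustration, not the actual automation used here: it assumes fabric and Ollama are installed, and the exact flags and helper names may differ between fabric versions.

```shell
# One-time setup: pull the model locally (large download, needs capable hardware)
ollama pull llama3:70b

# Extract a video's transcript with fabric's yt helper, then pipe it
# through the summarize pattern, running against the local llama3:70b model.
# VIDEO_ID is a placeholder for a real YouTube video ID.
yt --transcript "https://www.youtube.com/watch?v=VIDEO_ID" \
  | fabric --pattern summarize --model llama3:70b \
  > summary.md
```

The resulting summary.md is what would then be appended to the bottom of the corresponding page.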
Caution
This is a fully automated process as of now, with no manual input, which means the accuracy of the content is not guaranteed.
Credit where credit is due
- https://github.com/danielmiessler/fabric/ : for doing a lot of the legwork in terms of good prompts
- https://github.com/ollama/ollama : Ollama allows language models to be run on local hardware