CSS resets
A curated list of awesome Go frameworks, libraries and software - Awesome Go / Golang
tk9_0 package - modernc.org/tk9.0 - Go Packages
Void - opensource code copilot
2M users but no money in the bank. Tough times 😔
Tips for building Bubble Tea programs
svg online editor
Who needs GitHub Copilot when you can roll your own AI code assistant at home • The Register
Zed - Code at the speed of thought
Writing generic collection types in Go: the missing documentation | DoltHub Blog
Problems - LeetCode
caddy webserver to return 404 on file not found
How LLMs Work, Explained Without Math - miguelgrinberg.com
Online compiler for C, C++, Java, Python, React, Node.js and more
GitHub - quasilyte/roboden-game: An indirect control real-time strategy game about robot colonies
Difftastic, a structural diff - syntax-aware code comparison
210,000 CODERS lost jobs as NVIDIA released NEW coding language. - YouTube
Visual Guide to Slices in Go — Ozan Sazak
microsoft/Security-101: 7 Lessons, Kick-start Your Cybersecurity Learning.
Home - Replit
Replit is a site where I can run any REPL online. Can be used for AI
Are there any tiny (1-3b) models finetuned for coding available in GGUF format? : LocalLLaMA
bigcode (BigCode)
Research community developing various code models, small and big. Models may not be instruction-tuned
WizardLM (WizardLM)
deepseek-ai (DeepSeek)
They have the 1.3B version!!! This may be the best to start with for Newspeak. Training should work even on Hugging Face
DeepSeek
deepseek-ai/deepseek-coder-6.7b-instruct · Hugging Face
Another possible model. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
LLaMA 7B GPU Memory Requirement - Transformers - Hugging Face Forums
With the optimizers of bitsandbytes (like 8 bit AdamW), you would need 2 bytes per parameter, or 14 GB of GPU memory.
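The forum figure is simple arithmetic: memory for the weights is parameter count times bytes per parameter. A back-of-envelope sketch (the `memGB` helper and the precision examples are my own illustration, not from the forum post):

```go
package main

import "fmt"

// memGB estimates GPU memory for model weights alone:
// parameter count times bytes per parameter, in GB (1e9 bytes).
// Gradients, optimizer state, and activations add more on top.
func memGB(params, bytesPerParam float64) float64 {
	return params * bytesPerParam / 1e9
}

func main() {
	// 7B params at 2 bytes/param -> 14 GB, the figure quoted above.
	fmt.Printf("7B @ 2 bytes/param: %.0f GB\n", memGB(7e9, 2))
	// For comparison, full fp32 weights take 4 bytes/param.
	fmt.Printf("7B @ 4 bytes/param: %.0f GB\n", memGB(7e9, 4))
}
```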
stabilityai/stable-code-3b · Hugging Face
Another potential model to use for Newspeak, but it is NOT open source. Advantage: 2.5B params, so it should be usable on small GPUs
Large Language Models for Domain-Specific Language Generation: How to Train Your Dragon | by Andreas Mülder | Medium
training a model like Llama with 2.7 billion parameters outperformed a larger model like Vicuna with 13 billion parameters. Especially when considering resource consumption, this might be a good alternative to using a 7B Foundation model instead of a full-blown ChatGPT. The best price-to-performance base model for our use case turned out to be Mistral 7b. The model is compact enough to fit into an affordable GPU with 24GB VRAM and outperforms the other models with 7B parameters.
Can Ai Code Results - a Hugging Face Space by mike-ravkine
Comparison of LLM models for coding
openchat/openchat-3.5-0106 · Hugging Face
Open source with lots of information. Uses multiple underlying models. Not sure how I would train for it
Welcome Mixtral - a SOTA Mixture of Experts on Hugging Face
The Mixtral model is new and seems to be good. Click on “Demo” to test it
StarCoder: A State-of-the-Art LLM for Code
Article has comparison with other code-LLM models
huybery/Awesome-Code-LLM: An awesome and curated list of best code-LLM for research.
Large language models and the rise of the AI code generators | InfoWorld
Review of LLM specialized for code generation
OpenAI Codex - Wikipedia
Model which generates code for Python, JavaScript, Go, Shell, Perl, Swift, Ruby, and PHP
codellama (Code Llama) - Huggingface model for generating programs. Maybe can be used for Newspeak?
Introducing Gemini: Google’s most capable AI model yet
Advanced coding: Our first version of Gemini can understand, explain and generate high-quality code in the world’s most popular programming languages, like Python, Java, C++, and Go. Using a specialized version of Gemini, we created a more advanced code generation system, AlphaCode 2.
AI Code Tools: The Ultimate Guide in 2024
AI code tools: good summary. Does not say which pre-trained model each tool uses. One is Gemini (Bard) -> AlphaCode 2
Fine-tune a pretrained model
Uses the BERT model, trained on the Yelp dataset
PuerkitoBio/goquery: A little like that j-thing, only in Go.
dunglas/frankenphp: The modern PHP app server
mmcdole/gofeed: Parse RSS, Atom and JSON feeds in Go
Exploring Go 1.22: Effective HTTP Routing Strategies — Talkative Developer
Ten great Godot games + source code from Game Off 2023 - DEV Community
BigCode - Playground - a Hugging Face Space by bigcode
Look for models that could be used in Newspeak
bresenham algorithms
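For reference, the classic integer-only line-drawing algorithm bookmarked above; this is the textbook all-octant variant with a single error accumulator (the function name `bresenham` and the return type are my own choices):

```go
package main

import "fmt"

// bresenham returns the grid points on a line from (x0,y0) to (x1,y1),
// endpoints included, using only integer arithmetic.
func bresenham(x0, y0, x1, y1 int) [][2]int {
	abs := func(n int) int {
		if n < 0 {
			return -n
		}
		return n
	}
	dx, dy := abs(x1-x0), -abs(y1-y0)
	sx, sy := 1, 1
	if x0 > x1 {
		sx = -1
	}
	if y0 > y1 {
		sy = -1
	}
	err := dx + dy // error accumulator: dy is negative, so err straddles 0
	var pts [][2]int
	for {
		pts = append(pts, [2]int{x0, y0})
		if x0 == x1 && y0 == y1 {
			return pts
		}
		e2 := 2 * err
		if e2 >= dy { // error favors a horizontal step
			err += dy
			x0 += sx
		}
		if e2 <= dx { // error favors a vertical step
			err += dx
			y0 += sy
		}
	}
}

func main() {
	fmt.Println(bresenham(0, 0, 3, 2)) // [[0 0] [1 1] [2 1] [3 2]]
}
```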
Making Games in Go for Absolute Beginners
Writing WebAssembly By Hand
Writing WebAssembly Code by Hand
wat=wasm text format; sat=server to run wat; subo=project that oversees sat