
sudocode

@sudocode_ai

We use AI to help you create software better, faster. 🐦‍⬛

Love seeing the crazy ways we're trying to extract new utilities from LLMs. For those who are less familiar, relation extraction means defining the relationship between any two entities in a text. Manually doing this is super expensive, but with LLMs we can do this instantly!

The `rebel-large` model is awesome for relation extraction 🔗 Paired with CUDA, it’s blazing fast ⚡️. With @llama_index 🦙, we can now build a knowledge graph over any text data super quickly! 🕸️ Full Colab notebook showing how you can use it: colab.research.google.com/drive/1G6pcR0p…
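The Colab shows the end-to-end llama_index pipeline; as a taste, here's a hedged sketch of just the parsing step. `rebel-large` generates triplets as one linearized string, and the token layout below is our assumption based on the Babelscape/rebel-large model card (in practice you'd first decode the model output and strip special tokens like `<s>`, `<pad>`, `</s>`):

```python
def extract_triplets(text):
    """Parse REBEL's linearized output into (head, relation, tail) triplets.

    Assumed layout (per the rebel-large model card):
    "<triplet> head <subj> tail <obj> relation <triplet> ..."
    """
    triplets = []
    head = relation = tail = ""
    mode = None
    for token in text.split():
        if token == "<triplet>":
            # Flush the previous triplet before starting a new one.
            if head and relation and tail:
                triplets.append((head.strip(), relation.strip(), tail.strip()))
            head = relation = tail = ""
            mode = "head"
        elif token == "<subj>":
            mode = "tail"
        elif token == "<obj>":
            mode = "relation"
        elif mode == "head":
            head += token + " "
        elif mode == "tail":
            tail += token + " "
        elif mode == "relation":
            relation += token + " "
    if head and relation and tail:
        triplets.append((head.strip(), relation.strip(), tail.strip()))
    return triplets

# Illustrative decoded string (not real model output):
# extract_triplets("<triplet> Llama 2 <subj> Meta <obj> developer")
# -> [("Llama 2", "developer", "Meta")]
```

Each (head, relation, tail) triplet can then be fed into a llama_index knowledge graph, which is what the notebook does end to end.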



We at sudocode are excited for all the new LLMs being released! What kinds of things do you plan on building?

Quoting @_akhaliq: Meta releases Llama 2: Open Foundation and Fine-Tuned Chat Models

paper: ai.meta.com/research/publi…
blog: ai.meta.com/llama/

develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion…


This goes along with our intuition that ever-growing context windows aren't a one-size-fits-all solution. sudocode uses specialized agents that compress context in order to achieve better results. What are some tactics y'all are trying?
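sudocode's actual agents aren't shown in this thread, but one simple compression tactic can be sketched as follows (illustrative only, our assumption of one possible approach): keep the most recent chat turns that fit a rough size budget instead of shipping the whole history on every call.

```python
def trim_history(messages, budget_chars=4000):
    """Drop the oldest messages until the remainder fits within budget_chars.

    messages: list of {"role": ..., "content": ...} dicts, oldest first.
    A real agent might summarize older turns instead of dropping them.
    """
    kept, used = [], 0
    for msg in reversed(messages):          # walk newest -> oldest
        cost = len(msg["content"])
        if used + cost > budget_chars:
            break                           # budget exhausted
        kept.append(msg)
        used += cost
    return list(reversed(kept))             # restore chronological order
```

Character counts stand in for real token counts here; swapping in a tokenizer-based cost function is the obvious upgrade.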


Interested in formatting your OpenAI outputs? One common issue people run into with code outputs is unwanted markdown. We've found that few-shot examples can help! For example, add this to your prompt:

Bad Response:
```
print("hello_world")
```

Good Response:
print("hello_world")
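One way to wire that few-shot pair into a chat request (a hedged sketch; the system wording and demonstration turn are ours, not from the tweet):

```python
# Few-shot demonstration steering the model away from markdown fences.
FEW_SHOT = [
    {"role": "system",
     "content": "Return raw code only. Never wrap answers in markdown fences."},
    # One bad/good demonstration: the assistant turn shows the desired
    # fence-free style.
    {"role": "user", "content": "Print hello world in Python."},
    {"role": "assistant", "content": 'print("hello_world")'},
]

def build_messages(question):
    """Prepend the few-shot examples to the user's real question."""
    return FEW_SHOT + [{"role": "user", "content": question}]

# The result is what you'd pass as `messages` to the OpenAI chat
# completions endpoint, e.g.:
#   client.chat.completions.create(model="gpt-3.5-turbo",
#                                  messages=build_messages("Write fizzbuzz."))
```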


Shoutout to @posthog for one of the easiest setups ever! Took us 5 minutes and now we have session replays, insights, and more, all for free. Highly recommend that new companies instrument analytics early.


One nifty tip for making the most of your rate limits: if your application depends on OpenAI's gpt-4 or gpt-3.5-turbo, one way to get more bang for your buck is to round-robin through the models. Each model (including the 32k variants and dated snapshots) has its own rate limit!
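The rotation itself is a one-liner with `itertools.cycle` (a sketch; the model list reflects the mid-2023 OpenAI lineup the tweet mentions and is an assumption):

```python
from itertools import cycle

# Each of these draws on its own rate limit, so rotating spreads the load.
MODELS = ["gpt-4", "gpt-4-32k", "gpt-3.5-turbo", "gpt-3.5-turbo-16k"]
_rotation = cycle(MODELS)

def next_model():
    """Return the next model name in round-robin order."""
    return next(_rotation)

# Then pass next_model() as the `model` argument of each API call.
```

Worth noting: the models differ in quality and price, so this trade-off only makes sense when any of them is good enough for the task at hand.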

