Moritz Laurer
MoritzLaurer's activity
This will probably be the basis for many future SOTA encoders! And I can finally stop using DeBERTa-v3 from 2021 :D
Congrats @answerdotai, @LightOnIO, and collaborators like @tomaarsen!
Paper and models here 👉 https://huggingface.co./collections/answerdotai/modernbert-67627ad707a4acbf33c41deb
Hey @borowis, I don't think there is a plan to add embedding models to the NIM API. Embedding models are quite small, which makes them easier to run on accessible hardware (vs. the H100 GPUs running the large LLMs on the NIM API). I'd recommend deploying embedding models on a cheap GPU (or even a CPU) via the HF dedicated endpoints (https://huggingface.co./inference-endpoints/dedicated), and you can use the autoscaling/scale-to-zero feature to avoid unnecessary costs.
(The smaller BGE models from the MTEB leaderboard are always a good place to start)
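For example, a small BGE model already runs well on CPU with sentence-transformers. A minimal sketch (the model ID is just an illustration; any of the smaller BGE checkpoints from the leaderboard works the same way):

# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

# Small BGE model; light enough for a CPU or a cheap GPU endpoint.
model = SentenceTransformer("BAAI/bge-small-en-v1.5")
embeddings = model.encode(
    ["How do I deploy an embedding model?", "Scale-to-zero avoids idle costs."],
    normalize_embeddings=True,  # normalized vectors: cosine similarity == dot product
)
print(embeddings.shape)  # (2, 384)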
huggingface/open-source-ai-year-in-review-2024
pip install prompt-templates. Motivation: The community currently shares prompt templates in a wide variety of formats: in datasets, in model cards, as strings in .py files, as .txt/.yaml/.json/.jinja2 files, etc. This makes sharing and working with prompt templates unnecessarily complicated.
Prompt templates are currently the main hyperparameter that people tune when building complex LLM systems or agents. If we don't have a common standard for sharing them, we cannot systematically test and improve our systems. After comparing different community approaches, I think that working with modular .yaml or .json files is the best approach.
The prompt-templates library:
- proposes a standard for sharing prompts (entirely locally or on the HF Hub)
- provides some utilities that are interoperable with the broader ecosystem
Try it:
# !pip install prompt-templates
from prompt_templates import PromptTemplateLoader
prompt_template = PromptTemplateLoader.from_hub(repo_id="MoritzLaurer/closed_system_prompts", filename="claude-3-5-artifacts-leak-210624.yaml")
The library is in early stages, feedback is welcome!
More details in the docs: https://github.com/MoritzLaurer/prompt_templates/
TL;DR:
- public storage is free and (barring blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)
docs: https://huggingface.co./docs/hub/storage-limits
We optimize our infrastructure continuously to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥
cc: @reach-vb @pierric @victor and the HF team
The reference names are always in this collection for the NIM API: https://huggingface.co./collections/nvidia/nim-serverless-inference-api-66a3c6fcdcb5bbc6e975b508
It works for me with this code, actually. The only change I made is minimal: removing the "Meta-" prefix from the model identifier, since Meta recently renamed their models on the HF Hub.
Can you try again with the code below?
#!pip install "huggingface_hub>=0.25.0"
import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    base_url="https://huggingface.co./api/integrations/dgx/v1",
    api_key=os.getenv("HF_TOKEN_ENTERPRISE"),  # see docs: https://huggingface.co./blog/inference-dgx-cloud#create-a-fine-grained-token
)

output = client.chat.completions.create(
    model="meta-llama/Llama-3.1-405B-Instruct-FP8",  # previously "meta-llama/Meta-Llama-3.1-405B-Instruct-FP8"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    max_tokens=1024,
)
print(output)
OK, I could reproduce the issue and I get the same error. I'm reporting it to Nvidia. Thanks for flagging it!
What error do you get for the 405b?
Hey @jmparejaz , did you use a token from an enterprise org as explained here https://huggingface.co./blog/inference-dgx-cloud#create-a-fine-grained-token ?
1,000 spots available, first-come, first-served, with some surprises during the stream!
You can register and add to your calendar here: https://streamyard.com/watch/JS2jHsUP3NDM
During user research with colleagues @MoritzLaurer and @Jofthomas, we discovered that the class definition currently used to define a Tool in transformers.agents is a bit tedious to use, because it requires spelling out a lot of detail.
⚡️ So I've made an easier way to build tools: just write a function with type hints and a docstring, and add a @tool decorator in front.
✅ Voilà, you're good to go!
Read all about it in the new doc here: https://huggingface.co./docs/transformers/main/en/agents#create-a-new-tool
And don't hesitate to give feedback, I'm all ears! 🤗
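For context, here is a minimal sketch of what such a decorated tool can look like, based on the description above (it assumes a recent transformers version that exposes the tool decorator; the function itself is just an illustrative example):

from transformers import tool

@tool
def get_word_count(text: str) -> int:
    """
    Counts the number of words in a piece of text.

    Args:
        text: The text whose words should be counted.
    """
    return len(text.split())

# The decorator turns the function into a Tool an agent can call,
# using the type hints and the docstring to build the tool's description.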
Here is the full thesis if you're interested: https://research.vu.nl/ws/portalfiles/portal/355675396/dissertationlaurerfinal+-+66c885c7e9d0b.pdf
Here is the collection of my most recent models: https://huggingface.co./collections/MoritzLaurer/zeroshot-classifiers-6548b4ff407bb19ff5c3ad6f
I first learned about instruction-based classifiers like BERT-NLI 3-4 years ago, through the @HuggingFace ZeroShotClassificationPipeline. Digging deeper into this, I found it surprisingly easy to discover new datasets, newer base models, and reusable fine-tuning scripts on the HF Hub for creating my own zero-shot models, although I didn't know much about fine-tuning at the time.
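For anyone who hasn't used these models: the zero-shot pipeline is all you need to classify text into arbitrary labels without any training. A minimal sketch (the model ID below is just one example from the collection; any of the zero-shot classifiers can be swapped in):

from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/deberta-v3-large-zeroshot-v2.0",
)
result = classifier(
    "The new EU regulation requires platforms to label AI-generated content.",
    candidate_labels=["politics", "sports", "technology", "economy"],
)
print(result["labels"][0], result["scores"][0])  # highest-scoring label and its score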
Thanks to the community effect of the Hub, my models were downloaded hundreds of thousands of times after a few months. Seeing my research being useful for people motivated me to improve and upload newer models. Leaving my contact details in the model cards led to academic cooperation and consulting contracts (and eventually my job at HF).
That's the power of open science & open source: learning, sharing, improving, collaborating.
I mean every word in my thesis acknowledgments (screenshot). I'm very grateful to my supervisors @vanatteveldt @CasAndreu @KasperWelbers for their guidance; to @profAndreaRenda and @CEPS_thinktank for enabling me to work part-time during the first year; to @huggingface for creating awesome tools and an awesome platform; and to many others who are not active on social media.
Links to the full thesis and the collection of my most recent models are below.
PS: If someone happens to speak Latin, let me know if my diploma contains some hidden Illuminati code or something :D
#!pip install "huggingface_hub>=0.25.0"
from huggingface_hub import InferenceClient

client = InferenceClient(
    base_url="https://huggingface.co./api/integrations/dgx/v1",
    api_key="MY_FINEGRAINED_ENTERPRISE_ORG_TOKEN",  # see docs: https://huggingface.co./blog/inference-dgx-cloud#create-a-fine-grained-token
)

output = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct-FP8",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    max_tokens=1024,
)
print(output)
- It's pay-as-you-go, so it doesn't have rate limits like the standard HF Serverless API and you don't need to commit to hardware like for a dedicated endpoint.
- It works out of the box with the new v0.25 release of our huggingface_hub.InferenceClient
- It's specifically tailored to a small collection of popular open-weight models. For a broader selection of open models, we recommend using the standard HF Serverless API.
- Note that you need a token from an Enterprise Hub organization to use it.
Details in this blog post: https://huggingface.co./blog/inference-dgx-cloud
Compatible models in this HF collection: https://huggingface.co./collections/nvidia/nim-serverless-inference-api-66a3c6fcdcb5bbc6e975b508
Release notes with many more features of huggingface_hub==0.25.0: https://github.com/huggingface/huggingface_hub/releases/tag/v0.25.0
Copy-pasteable code in the first comment:
1. Start testing an idea by prompting an LLM/VLM behind an API. It's fast and easy, and I avoid wasting time tuning a model on a task that might not make it into production anyway.
2. The LLM/VLM then needs to be manually validated. Anyone seriously considering putting AI into production has to do at least some manual validation. Setting up a good validation pipeline with a tool like Argilla is crucial, and it can be reused for any future experiments (see the sketch at the end of this post). Note: you can use LLM-as-a-judge to automate some evals, but you always also need to validate the judge!
3. Based on this validation I can then (a) either just continue using the prompted LLM if it is accurate enough and it makes sense financially given my load; or (b) if the LLM is not accurate enough or too expensive to run in the long-run, I reuse the existing validation pipeline to annotate some additional data for fine-tuning a smaller model. This can be sped up by reusing & correcting synthetic data from the LLM (or just pure distillation).
Paper: https://arxiv.org/pdf/2409.06857
Argilla docs: https://docs.argilla.io/latest/
Argilla is also very easy to deploy with Hugging Face Spaces (or locally): https://huggingface.co./new-space?template=argilla%2Fargilla-template-space
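To make step 2 a bit more concrete, here is a rough sketch of pushing LLM outputs into Argilla for manual validation. It assumes the Argilla 2.x Python SDK and a running Argilla instance (e.g. on a HF Space); the dataset name, labels, and example record are placeholders:

import argilla as rg

# Connect to a running Argilla instance (e.g. deployed as a HF Space).
client = rg.Argilla(api_url="https://my-argilla-space.hf.space", api_key="YOUR_API_KEY")

# Define what annotators see (the model output) and what they answer (correct / incorrect).
settings = rg.Settings(
    fields=[rg.TextField(name="model_output")],
    questions=[rg.LabelQuestion(name="is_correct", labels=["correct", "incorrect"])],
)
dataset = rg.Dataset(name="llm-validation", settings=settings, client=client)
dataset.create()

# Log the LLM/VLM outputs you want a human to validate.
dataset.records.log([
    rg.Record(fields={"model_output": "The contract was signed on 12 March 2024."}),
])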
Remember scaling laws? These are empirical laws that say "the bigger your model, the better it gets". More precisely, "as your training compute increases exponentially, loss decreases in a linear fashion". They have wild implications, suggesting that spending 100x more training compute would get you super-LLMs. That's why companies are racing to build the biggest AI superclusters ever, and Meta bought 350k H100 GPUs, which probably cost on the order of $10B.
But think of this: we're building huge reasoning machines, yet we only ask them to do one pass through the model to get one token of the final answer, i.e., we expend minimal effort on inference. That's like building a Caterpillar truck and making it run on a lawnmower's motor. Couldn't we optimize this?
💡 So instead of scaling up training by training even bigger models on many more trillions of tokens, Google researchers explored an under-explored avenue: scaling up inference compute.
They combine two methods to use more inference compute: either a reviser that iteratively refines the model's answers, or generating N different completions (for instance through beam search) and selecting only the best one with an additional verifier model (see the toy sketch at the end of this post).
They use a PaLM 2 model (released in May 2023) on the MATH dataset: PaLM 2 has the advantage of scoring low on MATH, but not zero, so improvements are noticeable.
And the results show that for the same fixed amount of inference compute:
🔥 a smaller model with more effort on decoding beats a 14x bigger model using naive greedy sampling.
That means that you can divide your training costs by 14 and still get the same performance for the same inference cost!
Take that, scaling laws. Mark Zuckerberg, you're welcome, hope I can get some of these H100s.
Read the paper here 👉 Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (2408.03314)
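To make the second method concrete, here is a toy best-of-N sketch: a small generator proposes several completions and a separate scoring model keeps the best one. The models below are stand-ins (GPT-2 and an open reward model), not the PaLM 2 generator or the trained verifier from the paper:

import torch
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

prompt = "Q: What is 17 * 23? A:"

# 1) Sample N candidate completions from a (tiny, illustrative) generator.
generator = pipeline("text-generation", model="gpt2")
candidates = generator(prompt, do_sample=True, num_return_sequences=8, max_new_tokens=32)

# 2) Score each candidate with a separate model (an open reward model as a stand-in verifier).
reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
tok = AutoTokenizer.from_pretrained(reward_name)
verifier = AutoModelForSequenceClassification.from_pretrained(reward_name)

def score(question: str, answer: str) -> float:
    inputs = tok(question, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return verifier(**inputs).logits[0].item()

# 3) Keep only the highest-scoring completion: same generator, more inference compute.
best = max(candidates, key=lambda c: score(prompt, c["generated_text"]))
print(best["generated_text"])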
What feature or improvement would make the biggest impact on Hugging Face?
Whether it's the Hub, better documentation, new integrations, or something completely different: we're all ears!
Your feedback shapes the future of Hugging Face. Drop your ideas in the comments below!