The released model, Latam-GPT, is the first large-scale open-source language model developed specifically for Latin America and the Caribbean. It has 70 billion parameters (70B) and is a continued pre-training (CPT) adaptation of Meta’s Llama 3.1 70B base model, trained on a regionally curated corpus of roughly 300 billion tokens that emphasizes Latin American languages, cultures, and contexts.
Press release: https://cenia.cl/2026/02/10/latam-gpt-la-primera-ia-regional...
Presentation (in Spanish): https://www.youtube.com/watch?v=FdLzAiQizhA
Hugging Face: https://huggingface.co/latam-gpt
GitHub: https://github.com/latam-gpt
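Since the weights are published under the latam-gpt organization on Hugging Face and the model is a Llama 3.1 70B derivative, it should load with the standard transformers API. A minimal sketch is below; the repository id "latam-gpt/latam-gpt-70b" is an assumption for illustration, so check the org page for the actual name. Note this is a CPT base model, not an instruction-tuned chat model, so plain text continuation is the appropriate usage.

```python
# Minimal sketch: loading a Llama-3.1-derived 70B checkpoint with transformers.
# The repo id below is hypothetical -- see https://huggingface.co/latam-gpt
# for the published model names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "latam-gpt/latam-gpt-70b"  # assumed repo id, not confirmed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~140 GB of weights in bf16; needs multi-GPU or offloading
    device_map="auto",           # spread layers across available GPUs/CPU automatically
)

# Base-model usage: continue a prompt rather than chat.
prompt = "La historia de la cueca chilena comienza"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```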