Grok-1 Model Card

Model details

Grok-1 is an autoregressive Transformer-based model pre-trained to perform next-token prediction. The model was then fine-tuned using extensive feedback from both humans and the early Grok-0 models. The initial Grok-1 has a context length of 8,192 tokens and was released in November 2023.
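
To make the next-token-prediction objective concrete, below is a minimal sketch of the cross-entropy loss an autoregressive model is trained to minimize. It is illustrative only: the array shapes and the toy "model" outputs are assumptions, not Grok-1's actual training code.

```python
import numpy as np

def next_token_loss(logits: np.ndarray, tokens: np.ndarray) -> float:
    """Average cross-entropy for next-token prediction.

    logits: (seq_len, vocab_size) scores the model assigns at each position.
    tokens: (seq_len,) token ids of the training sequence.
    The prediction at position t is scored against the *next* token, tokens[t + 1].
    """
    # Shift so predictions at positions 0..T-2 target tokens 1..T-1.
    preds, targets = logits[:-1], tokens[1:]
    # Numerically stable log-softmax over the vocabulary.
    preds = preds - preds.max(axis=-1, keepdims=True)
    log_probs = preds - np.log(np.exp(preds).sum(axis=-1, keepdims=True))
    # Negative log-likelihood of each true next token, averaged over positions.
    return float(-log_probs[np.arange(len(targets)), targets].mean())

# Toy example: vocabulary of 5 tokens, sequence of 4 tokens, random "model" outputs.
rng = np.random.default_rng(0)
tokens = np.array([2, 0, 4, 1])
logits = rng.normal(size=(4, 5))
print(next_token_loss(logits, tokens))
```

At inference time the same model is applied repeatedly, each step sampling one token from the predicted distribution and appending it to the context, up to the 8,192-token window.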

Intended uses

Grok-1 is intended to be used as the engine behind Grok for natural language processing tasks, including question answering, information retrieval, creative writing, and coding assistance.

Limitations

While Grok-1 excels at information processing, it is crucial to have humans review Grok-1's work to ensure accuracy. The Grok-1 language model cannot search the web independently; when deployed in Grok, search tools and databases enhance the capabilities and factuality of the model. The model can still hallucinate, despite its access to external information sources.
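
The sketch below illustrates, in broad strokes, how a deployment can ground a model with retrieved context: the surrounding harness performs the search and passes results into the prompt, since the model itself cannot browse. All names here (search_web, Document, build_grounded_prompt) are hypothetical placeholders, not Grok's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    snippet: str

def search_web(query: str) -> list[Document]:
    # Hypothetical stand-in; a real deployment would call an actual search API.
    return [Document("Example result", "A snippet relevant to the query.")]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved snippets so the model can draw on information fresher
    than its training cut-off. Grounding reduces, but does not eliminate,
    hallucination, so human review of the output is still needed."""
    docs = search_web(question)
    context = "\n".join(f"- {d.title}: {d.snippet}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_grounded_prompt("What happened after Q3 2023?"))
```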

Training data

The training data used for the release version of Grok-1 comes from both the Internet up to Q3 2023 and the data provided by our AI Tutors.

Evaluation

Grok-1 was evaluated on a range of reasoning benchmark tasks and on curated foreign mathematics examination questions. We engaged early alpha testers to evaluate a version of Grok-1, including through adversarial testing. We are in the process of expanding our early adopters to a closed beta via Grok early access.
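
For reference, scores on benchmarks of this kind are often computed as exact-match accuracy against gold answers. The sketch below shows one common way to do this; the normalization rule and the toy data are assumptions, not the evaluation harness actually used for Grok-1.

```python
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of model answers that match the reference after light normalization."""
    def norm(s: str) -> str:
        return s.strip().lower()

    assert len(predictions) == len(references)
    correct = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return correct / len(references)

# Toy stand-ins for model outputs and gold answers on a reasoning benchmark.
preds = ["42", "Paris ", "blue"]
refs = ["42", "paris", "red"]
print(exact_match_accuracy(preds, refs))  # 2/3 correct
```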