What is the official DeepSeek V4 paper title?
The public report is titled DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence.
Where is the DeepSeek V4 PDF hosted?
The official PDF is hosted in the DeepSeek-V4-Pro repository on Hugging Face, alongside the public model card and release notes.
Does the official preview mention two model families?
Yes. The official introduction presents DeepSeek-V4-Pro and DeepSeek-V4-Flash as the two main families in the preview release.
What is the fastest way to verify the 1M-token claim?
Open the official README or the technical report; the introduction explicitly states that both model families support a one-million-token context window.
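If you prefer to check programmatically rather than read the README, a minimal sketch like the one below pulls the model's config.json from the Hugging Face Hub and prints the declared context window. The repository id and the max_position_embeddings field are assumptions for illustration; confirm both against the official model card before relying on the result.

```python
# Minimal sketch: read the declared context window from a model's config.json.
# The repo id below is hypothetical -- substitute the repository named in the
# official release notes.
import json

from huggingface_hub import hf_hub_download

REPO_ID = "deepseek-ai/DeepSeek-V4-Pro"  # hypothetical; verify against the official model card

# Download the model's config.json from the Hub (cached locally after first call).
config_path = hf_hub_download(repo_id=REPO_ID, filename="config.json")
with open(config_path) as f:
    config = json.load(f)

# Most Transformers-style configs expose the context window as max_position_embeddings;
# the exact field name may differ, so fall back to printing the whole config if absent.
context_window = config.get("max_position_embeddings")
if context_window is not None:
    print("Declared context window:", context_window)
else:
    print("No max_position_embeddings field found; inspect the config manually:", config)
```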
Is this page the official DeepSeek documentation?
No. This is an independent editorial landing page meant to reduce search friction and route readers back to the official sources.