12 Days of OpenAI – Day 1
OpenAI has officially released o1, its advanced reasoning model, featuring improved performance on coding, math, and writing tasks, along with new image analysis capabilities.
The model makes 34% fewer major errors on complex questions than o1-preview, while delivering faster response times. It's now available to Plus and Team users, with Enterprise and Edu access coming next week.
ChatGPT Pro, a new $200-per-month subscription, has also been introduced, offering:
- Unlimited access to o1 and other models
- o1 pro mode for enhanced reasoning
- Access to o1-mini, GPT-4o, and Advanced Voice
- Future compute-intensive features
o1 pro mode demonstrates superior performance on ML benchmarks, particularly excelling in data science, programming, and case law analysis under a strict 4/4 reliability standard, where an answer counts as correct only if the model gets it right on all four attempts.
12 Days of OpenAI – Day 2
OpenAI has unveiled Reinforcement Fine-Tuning, a new model customization technique that allows organizations to create specialized AI models for complex domain-specific tasks in fields like coding, scientific research, and finance.
The company is expanding alpha access through its Reinforcement Fine-Tuning Research Program, targeting research institutes, universities, and enterprises. The program enables developers to customize models using high-quality tasks and reference answers to improve accuracy in specific domains.
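To make the idea concrete, here is a minimal, hypothetical sketch of how a team might assemble a dataset of domain tasks paired with reference answers, plus a toy grader. The file layout, field names, and grading rule are illustrative assumptions, not OpenAI's documented reinforcement fine-tuning format:

```python
import json

# Hypothetical example: domain-specific tasks paired with reference answers.
# Field names ("prompt", "reference_answer") are illustrative assumptions,
# not OpenAI's documented RFT schema.
tasks = [
    {
        "prompt": "Which statute governs the limitation period for written contracts in State X?",
        "reference_answer": "Section 8106",
    },
    {
        "prompt": "Classify the claim type for a policyholder reporting hail damage to a roof.",
        "reference_answer": "property_damage",
    },
]

# Write one JSON object per line (JSONL), a common format for fine-tuning data.
with open("rft_tasks.jsonl", "w") as f:
    for task in tasks:
        f.write(json.dumps(task) + "\n")

def exact_match_grade(model_answer: str, reference: str) -> float:
    """Toy grader: full credit only when the model reproduces the reference answer."""
    return 1.0 if model_answer.strip().lower() == reference.strip().lower() else 0.0
```

The key ingredient is an objective grading signal: tasks where answers can be checked automatically are exactly the ones the announcement highlights as working well.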
Early success has been observed in Law, Insurance, Healthcare, Finance, and Engineering sectors, particularly for tasks with objectively correct answers. The technology is expected to be publicly available in early 2025, with limited spots currently open for the research program.
Apply here if you're interested: https://openai.com/form/rft-research-program/
12 Days of OpenAI – Day 3
OpenAI has officially released Sora, their highly anticipated text-to-video generation model, now available on sora.com for ChatGPT Plus and Pro subscribers.
The release includes Sora Turbo, a significantly faster version than the February preview, offering:
- Video generation up to 1080p and 20 seconds
- Multiple aspect ratio options
- Ability to extend, remix, and blend existing content
- New storyboard interface for precise frame control
ChatGPT Plus users can generate up to 50 videos at 480p monthly, while Pro subscribers get 10x more usage, higher resolutions, and longer durations.
The platform features built-in safety measures including C2PA metadata, visible watermarks, and content restrictions. OpenAI emphasizes they’re introducing this technology early to allow society time to develop appropriate norms and safeguards for responsible use.
12 Days of OpenAI – Day 4
OpenAI has released Canvas, a new collaborative workspace in ChatGPT that enhances writing and coding capabilities beyond simple chat interactions. The feature is now available to all users on the web and in the Windows desktop app.
Key features include:
- Side-by-side editing with ChatGPT
- Direct Python code execution with real-time debugging
- Built-in commenting and feedback system
- Integration with custom GPTs
- Support for data visualization and graphics
- One-click formatting and enhancement tools
Canvas automatically activates when users input substantial text or code, offering a dedicated space for document editing, code debugging, and collaborative refinement. The tool is now included with the GPT-4o model and works seamlessly with both free and paid plans.
12 Days of OpenAI – Day 5
Apple and OpenAI have partnered to bring native ChatGPT integration across Apple’s ecosystem, offering seamless access through Siri, writing tools, and visual intelligence features.
The integration includes:
- Direct Siri handoff to ChatGPT for complex queries
- System-wide writing assistance and document composition
- Visual intelligence through iPhone camera
- Document analysis and summarization in macOS
- Anonymous usage option or enhanced features with ChatGPT account
The feature is available through Apple Intelligence settings on iOS 18.2 and macOS Sequoia 15.2, allowing users to enable the ChatGPT extension for frictionless AI assistance across supported Apple devices.
12 Days of OpenAI – Day 6
OpenAI has expanded Advanced Voice with new video chat and screen sharing capabilities, allowing users to engage in real-time visual conversations with ChatGPT across more than 50 languages.
The update includes:
- Live video conversations with ChatGPT
- Screen sharing for collaborative assistance
- Real-time visual context understanding
- Interactive troubleshooting and learning support
- Integration with existing voice features
As a holiday special, OpenAI introduced “Santa Mode” accessible through a snowflake icon, letting users chat with Santa using Advanced Voice throughout December. All users will receive a one-time reset of their Advanced Voice usage limits for Santa conversations.
The features are rolling out now to Team users and most Plus/Pro subscribers (European rollout pending) through the mobile apps, desktop apps, and chat.openai.com.
12 Days of OpenAI – Day 7
OpenAI has introduced Projects, a new organizational feature that allows users to create dedicated workspaces with shared context, files, and custom instructions across conversations.
The system offers:
- Project-specific folders for related conversations
- File and document management
- Custom AI instructions per project
- Integration with existing features like Canvas, DALL-E, and web search
- Full GPT-4o model access
The feature is rolling out first to Plus, Pro, and Team users via web and Windows app, with Enterprise and Education access coming in January. Mobile and Mac users can currently view and interact with existing projects.
Projects follows a similar feature Anthropic shipped in June, but it addresses a real user pain point: no longer having to re-establish context and instructions in every new conversation.
12 Days of OpenAI – Day 8
OpenAI has announced that ChatGPT’s web search feature, previously limited to premium subscribers, is now available to all logged-in users with improved speed and functionality.
The update includes:
- Quick access through a new globe icon
- Voice search in Advanced Voice Mode for premium users
- Enhanced mobile experience with visual layouts
- Integration with Google and Apple Maps
- Option to set ChatGPT Search as default search engine
- Links displayed before ChatGPT responses
The feature expansion marks a significant step toward more capable AI assistants, particularly with voice integration turning ChatGPT into a more powerful alternative to traditional voice assistants.
OpenAI also announced an upcoming ‘mini Dev Day’ for tomorrow, suggesting more developer-focused updates are on the way.
12 Days of OpenAI – Day 9
OpenAI unveiled significant developer-focused enhancements during Day 9 of its livestream, introducing API access to the o1 reasoning model alongside improvements to its Realtime API and developer toolkit.
Key announcements:
- o1 arrives in the API with function calling, structured outputs, vision inputs, and an adjustable reasoning-effort parameter (see the sketch after this list)
- Pricing is set at $15 per million input tokens and $60 per million output tokens (each roughly 750k words), approximately 3-4x GPT-4o rates
- The Realtime API gets a 60% price cut for GPT-4o audio, a budget-friendly GPT-4o mini option, and WebRTC support
- New Preference Fine-Tuning enables model customization from pairs of preferred and non-preferred responses
- Beta SDKs now available for Go and Java developers
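Below is a minimal sketch of what calling o1 through the API with these options might look like, using the OpenAI Python SDK. The reasoning_effort value and the structured-output schema are illustrative, and exact parameter support should be checked against the current API reference rather than taken from this sketch:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask o1 for a structured answer; reasoning_effort trades latency and cost
# for deeper reasoning. Parameter names follow the announced features
# (structured outputs, adjustable reasoning effort) but may differ in detail.
response = client.chat.completions.create(
    model="o1",
    reasoning_effort="medium",  # low / medium / high
    messages=[
        {"role": "user", "content": "Summarize the key risk factors in this filing: ..."}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "risk_summary",
            "schema": {
                "type": "object",
                "properties": {
                    "risks": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["risks"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
)

print(response.choices[0].message.content)
```

Raising reasoning_effort generally trades latency and token spend for deeper reasoning, which is why it pairs naturally with the per-token pricing above.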
Impact: API access to o1, cheaper Realtime audio, Preference Fine-Tuning, and new SDKs give developers considerably more room to build reasoning-heavy and voice-enabled applications.
12 Days of OpenAI – Day 10
OpenAI introduced two new ways to interact with ChatGPT during Day 10 of its livestream – a traditional 1-800 number service and WhatsApp integration for global accessibility.
Key features:
- US customers get 15 minutes of free calls per month by dialing 1-800-CHATGPT from any phone
- The service works with all phones, including vintage models, removing technical barriers
- WhatsApp integration enables international users to chat with a lighter ChatGPT version
- WhatsApp service includes daily usage limits with planned features like image analysis
Impact: OpenAI’s expansion into traditional communication channels brings AI assistance to a broader audience, making the technology accessible to users regardless of their technical expertise or device preferences.
12 Days of OpenAI – Day 11
OpenAI revealed new features for ChatGPT’s Mac and Windows desktop apps, focused on enhanced app integration, new models, and expanded voice capabilities.
Key features:
- App Integration: ChatGPT can now access and analyze data from other running apps on your desktop with user permission, enabling new context-aware features. Initial supported apps include Warp, Xcode, Apple Notes, Notion, and Quip.
- o1 in the desktop apps: The o1 reasoning model (released on Day 1) can now be used directly from the desktop apps, where it was shown delivering faster responses for coding tasks through the new app integrations.
- Advanced Voice Mode: Users can now talk to ChatGPT by voice from the desktop apps, with Santa Claus making a special guest appearance to demonstrate the capability.
- Native Mac App Enhancements: The native Mac app remains lightweight and efficient, and includes a keyboard shortcut (Option+Space) for quick access to ChatGPT.
Impact: These updates expand the utility of ChatGPT by streamlining interactions with other applications and enabling new modalities for user input. OpenAI continues its commitment to desktop apps as a key part of ChatGPT’s future development, promising even more advanced features in 2025.
12 Days of OpenAI – Day 12
OpenAI announced two new reasoning models, o3 and its smaller counterpart o3 mini, marking a significant leap in AI capabilities. While not publicly launched, both models are now available for public safety testing, allowing researchers to identify potential risks before wider release.
Key Features:
- Advanced Reasoning: Both o3 and o3 mini demonstrate state-of-the-art performance on challenging benchmarks in coding and mathematics, significantly outperforming previous OpenAI models. o3 mini is highlighted for its cost-effectiveness, offering comparable performance to larger models at a fraction of the cost.
- Public Safety Testing: OpenAI is opening access to both models for safety testing by researchers, focusing on identifying potential risks and improving safety protocols. Applications are now open.
- Adaptive Reasoning Effort: o3 mini introduces adjustable "reasoning effort" levels (low, medium, high), letting users trade response time against performance based on task complexity (a purely hypothetical usage sketch follows this list).
- Deliberative Alignment: A new safety technique, “deliberative alignment,” is utilized, enhancing the models’ ability to identify and reject unsafe prompts.
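Since neither model was publicly available at the time of the announcement, the following is a purely hypothetical sketch of how adjustable reasoning effort might be exercised from the API, mirroring the reasoning-effort parameter announced for o1. The "o3-mini" model name and its availability here are assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical: "o3-mini" was not publicly released when this was announced.
# The reasoning_effort parameter mirrors the one announced for o1's API and
# trades response time and cost against depth of reasoning.
question = "How many primes are there between 100 and 150? Show your reasoning briefly."

for effort in ("low", "medium", "high"):
    response = client.chat.completions.create(
        model="o3-mini",            # assumed model name, not yet released at the time
        reasoning_effort=effort,    # low = fastest/cheapest, high = most thorough
        messages=[{"role": "user", "content": question}],
    )
    print(f"[{effort}] {response.choices[0].message.content[:120]}")
```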
Impact: The launch of o3 and o3 mini continues OpenAI's push toward more capable and safer AI models. Opening them to external safety testing is a significant step in responsible AI development, leveraging outside expertise to assess and mitigate potential harms before wider release. OpenAI plans a full launch of o3 mini in late January 2025, with o3 following shortly thereafter.
What announcement from OpenAI’s 12 Days series excites you the most? How do you see these innovations shaping the future of AI interaction? Are you planning to try any of these new features? Share your thoughts and experiences in the comments below, and don’t forget to stay tuned for more updates as these technologies roll out!