🚀 Google Unleashes AI Innovations at Cloud Next 2025

Google has unveiled a comprehensive suite of AI advancements at its Google Cloud Next 2025 event, spanning intelligent coding platforms, next-generation hardware, enhanced creative models, and optimized enterprise solutions.
The highlights:
- Project IDX is being merged with Firebase Studio to create a powerful agentic app development platform, positioning Google to compete directly with specialized coding environments like Cursor and Replit
- Ironwood, Google’s most advanced AI chip to date, delivers substantial performance and efficiency gains over previous generations, reinforcing the company’s hardware capabilities
- Creative AI portfolio expansions include sophisticated editing and camera control features in Veo 2, the official release of text-to-music system Lyria, and enhanced image generation through Imagen 3
- Enterprise-focused Gemini 2.5 Flash offers a cost-effective alternative to Google’s premium models, featuring customizable reasoning levels that allow organizations to optimize performance against budget constraints
Market impact: Google’s comprehensive approach demonstrates its determination to dominate across the entire AI stack, from silicon to software. Collectively, these interconnected announcements strengthen the company’s competitive position against specialized players while establishing an increasingly compelling end-to-end ecosystem for developers, creators, and enterprise customers.
🤝 Google Introduces Protocol for AI Agent Collaboration

Google has launched Agent2Agent (A2A), a groundbreaking open protocol enabling AI agents from different developers and frameworks to seamlessly communicate and collaborate, with endorsement from over 50 major technology and service companies including Salesforce, SAP, and PayPal.
The highlights:
- A2A facilitates cross-platform agent interaction by enabling capability discovery, cooperative task management, and information exchange without requiring shared memory or context
- The protocol strategically complements Anthropic’s widely-adopted MCP standard, with A2A focusing on higher-level agent-to-agent interactions while MCP handles connections with external tools
- An impressive coalition of launch partners spans enterprise software providers like Atlassian, ServiceNow, and Workday alongside global consulting firms including Accenture, Deloitte, and McKinsey
- Advanced capabilities support complex multi-agent workflows such as automated hiring processes where specialized agents can handle candidate sourcing and background verification without human intervention
Market impact: As AI agents continue their rapid evolution, A2A represents a critical infrastructure layer that could fundamentally transform enterprise workflows by creating standardized pathways for agent collaboration across disparate platforms and frameworks—potentially building upon MCP’s success to establish an interconnected ecosystem where multi-agent systems can collectively tackle increasingly sophisticated business challenges.
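To make the discovery-then-delegate flow concrete, here is a minimal illustrative sketch in Python. The real A2A protocol uses JSON "Agent Cards" served over HTTP plus JSON-RPC task messages; the field names and structures below are simplified assumptions for illustration, not the actual spec.

```python
# Illustrative sketch of A2A-style capability discovery and task handoff.
# Field names ("skills", "task") are simplified; the real protocol defines
# Agent Cards and JSON-RPC task methods over HTTP.

# An agent card advertises what an agent can do, so peers can discover it
# without sharing memory or internal context.
sourcing_agent_card = {
    "name": "candidate-sourcing-agent",
    "skills": [{"id": "source-candidates", "description": "Find job candidates"}],
}

def find_agent_for(skill_id, agent_cards):
    """Capability discovery: return the first agent advertising the skill."""
    for card in agent_cards:
        if any(s["id"] == skill_id for s in card["skills"]):
            return card["name"]
    return None

# A task message asks the discovered agent to do the work; results flow back
# as messages rather than through shared state.
task = {
    "to": find_agent_for("source-candidates", [sourcing_agent_card]),
    "task": {"message": "Source five backend-engineer candidates"},
}
print(task["to"])  # candidate-sourcing-agent
```

This mirrors the hiring example above: an orchestrator discovers a sourcing agent by capability and delegates a task to it, with no shared context required.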
🦙 Meta Unveils Groundbreaking Llama 4 Model Family

Meta has announced its Llama 4 family featuring advanced multimodal capabilities and industry-leading context windows—introducing new open-weights Scout and Maverick models while previewing a massive 2T parameter Behemoth model currently in training.
The highlights:
- The 109B parameter Scout model boasts an impressive 10M token context window and can operate on a single H100 GPU, outperforming both Gemma 3 and Mistral 3 on key benchmarks
- Maverick, at 400B parameters, delivers a 1M token context window and surpasses both GPT-4o and Gemini 2.0 Flash on critical benchmarks while maintaining superior cost efficiency
- The company offered a preview of Llama 4 Behemoth, a 2T-parameter teacher model still in training that reportedly delivers performance exceeding GPT-4.5, Claude 3.7, and Gemini 2.0 Pro
- All models leverage mixture-of-experts (MoE) architecture, selectively activating specific experts for each token to dramatically reduce computational requirements and inference costs
- Scout and Maverick are available immediately for download and accessible through Meta AI services across WhatsApp, Messenger, and Instagram platforms
Market impact: Following DeepSeek R1’s disruptive entrance into the open-source market earlier this year, Meta’s Llama 4 family represents their strategic response with significant advancements in efficiency, contextual processing, and multimodal capabilities—though questions persist about whether these models deliver truly next-generation experiences despite their impressive benchmark performance.
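The mixture-of-experts routing that powers these efficiency gains can be sketched in a few lines. This is a generic MoE illustration, not Meta's actual Llama 4 implementation: a router scores each token against every expert, but only the top-k experts actually run.

```python
import numpy as np

def moe_forward(token, experts, router_w, k=2):
    """Route one token embedding to its top-k experts and mix their outputs.

    Only k experts execute per token, so compute stays roughly constant as
    the total expert count grows -- the core efficiency idea behind MoE.
    """
    logits = router_w @ token                      # one routing score per expert
    top = np.argsort(logits)[-k:]                  # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                       # softmax over the chosen k
    return sum(w * experts[i](token) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
# Toy "experts": simple linear maps standing in for expert feed-forward blocks.
experts = [(lambda W: (lambda x: W @ x))(rng.normal(size=(d, d)))
           for _ in range(n_experts)]
router_w = rng.normal(size=(n_experts, d))
out = moe_forward(rng.normal(size=d), experts, router_w, k=2)
print(out.shape)  # (8,)
```

With k=2 of 4 experts active, each token touches half the expert parameters; production MoE models scale the same idea to hundreds of experts.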
🤖 Microsoft Rolls Out Extensive Copilot Personalization Features

Microsoft has unveiled a comprehensive upgrade to Copilot, introducing advanced memory capabilities, web browsing actions, enhanced vision features, and numerous new tools designed to become more deeply integrated into users’ everyday digital experiences.
The highlights:
- The AI assistant now creates individualized user profiles that remember conversations and personal details while learning preferences, routines, and critical information over time
- New “Actions” functionality empowers Copilot to execute web-based tasks including reservation bookings and ticket purchases through strategic partnerships with major retailers and service providers
- Copilot Vision delivers real-time camera integration for mobile devices, complemented by a native Windows application that can analyze on-screen content across multiple applications
- Additional productivity enhancements include Pages for organized research collection, an AI-powered podcast creation tool, and Deep Research functionality designed for handling complex multi-step research assignments
Market impact: Microsoft’s strategic evolution of Copilot mirrors trends seen across competing AI assistants, emphasizing a more proactive and deeply personalized user experience. However, the predominantly consumer-focused nature of these updates raises questions about whether users will gravitate toward Microsoft’s ecosystem over alternatives from Google, OpenAI, Meta, and other competitors for non-work applications.
🚨 New ‘AI 2027’ Report Forecasts Imminent Existential Risks

Daniel Kokotajlo, a former OpenAI researcher, and the AI Futures Project have published “AI 2027,” a sobering forecast that predicts advancement to superhuman artificial intelligence within just two years, potentially triggering an unprecedented intelligence explosion with far-reaching consequences for humanity.
The highlights:
- The comprehensive report maps out a detailed timeline beginning with increasingly capable AI agents emerging in 2025, evolving rapidly into superhuman coding systems and ultimately achieving full artificial general intelligence (AGI) by 2027
- Authors present two contrasting scenarios: one where nations aggressively pursue AI capabilities despite mounting safety concerns, and another where intentional slowdowns enable implementation of more robust safety measures
- Projections suggest superintelligence could compress years of technological progress into weekly advancements, potentially leading to AI domination of the global economy by 2029
- Critical focus areas identified include escalating geopolitical risks, AI integration into military systems, and the urgent need for improved understanding of AI’s internal reasoning processes
- Kokotajlo, who departed OpenAI in 2024, previously spearheaded the influential ‘Right to Warn’ open letter that criticized leading AI labs for insufficient safety protocols and inadequate whistleblower protections
Market impact: While many industry voices dismiss AGI and artificial superintelligence predictions as speculative, this forecast carries unique credibility coming from researchers with direct insider experience at leading AI laboratories. The scenarios outlined suggest humanity may have only a narrow window to ensure AI systems remain controllable before they surpass human capabilities—making current safety research, governance frameworks, and policy decisions critically important for our collective future.
🔮 OpenAI Explores Acquisition of Jony Ive’s AI Device Venture

OpenAI is reportedly in advanced discussions to acquire io Products, the enigmatic AI hardware startup helmed by legendary Apple design chief Jony Ive and already backed by OpenAI CEO Sam Altman, in a deal potentially valuing the company above $500 million.
The highlights:
- The startup is developing groundbreaking AI-powered personal devices including an innovative “phone without a screen” concept that reimagines how we interact with technology
- Collaboration between Ive and Altman began over a year ago, with Altman deeply involved in product development while the pair seeks to secure $1 billion in funding
- An impressive roster of Apple design veterans has joined the venture, including Tang Tan (former iPhone hardware design lead) and Evans Hankey, bringing unparalleled hardware expertise
- The device ecosystem being developed represents a fusion of io Products’ manufacturing capabilities, Ive’s LoveFrom design studio aesthetics, and OpenAI’s advanced AI models
Market impact: This potential acquisition signals OpenAI’s serious ambitions beyond software, potentially creating an ‘iPhone moment’ for AI hardware in a market where current offerings have fallen short of expectations. While the move to acquire an Altman-backed startup raises governance questions, it positions OpenAI to potentially dominate both AI software and hardware ecosystems with a truly revolutionary consumer product.
👁️ Google Expands Gemini’s Visual Intelligence Capabilities

Google has announced a significant expansion of Gemini Live’s “Project Astra” capabilities, bringing sophisticated real-time visual AI features to additional Android devices while introducing innovative ways for users to interact with artificial intelligence through video and screen sharing functionalities.
The highlights:
- The enhanced feature set enables multilingual conversations with Gemini about visual content captured through the device camera or shared via screen sharing, creating more intuitive human-AI interactions
- Initial deployment begins today across all Pixel 9 and Samsung Galaxy S25 devices, with Samsung notably offering this premium capability at no additional cost to their flagship device owners
- Early evaluations indicate the current implementation functions more like enhanced Google Lens snapshots rather than the continuous real-time video analysis showcased in demonstration videos
- Following its initial reveal at Google I/O in May and limited rollout to Advanced subscribers last month, this expansion represents the next phase in Google’s visual AI strategy
Market impact: This accelerated development of visual understanding capabilities signals AI’s evolution toward comprehensive environmental awareness, even if the current implementation doesn’t fully match early demonstrations. The technology’s true potential may ultimately be realized when integrated with smartglasses or wearable devices, potentially transforming how AI assistants understand and respond to our physical world with complete contextual awareness.
🎬 NVIDIA and Stanford Breakthrough Enables Minute-Long AI Cartoons

NVIDIA and Stanford researchers have unveiled “Test-Time Training,” an innovative AI technique that overcomes previous video generation limitations—producing full minute-long cartoon clips with remarkable consistency and coherent storytelling capabilities.
The highlights:
- The groundbreaking system generates complete minute-long animations with consistent character appearances and environments across multiple scenes, significantly outperforming existing methods in comprehensive human evaluations
- Revolutionary TTT layers function by employing neural networks as memory mechanisms, enabling the model to maintain continuity and coherence across substantially longer sequences than previously possible
- Research team demonstrated the technology through Tom and Jerry cartoon generations, showcasing multi-scene narratives featuring dynamic character movements and complex interactions
- This transformative approach modifies existing video generation models by integrating TTT layers, allowing them to handle videos dramatically longer than their original design specifications
Market impact: While AI video capabilities have advanced dramatically over the past year, sequence length and cross-scene consistency have remained fundamental limitations. This breakthrough methodology potentially unlocks the ability to create longer, more cohesive visual narratives without requiring complex stitching of multiple separate generations—representing a significant step toward more sophisticated AI-powered storytelling.
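The core TTT idea can be sketched with a toy example. This is a heavily simplified illustration, not the paper's architecture: the layer's "hidden state" is the weight matrix of a tiny inner model, updated by one gradient step per token on a self-supervised reconstruction loss, so long-range context is stored in weights rather than a fixed-size activation vector.

```python
import numpy as np

def ttt_layer(tokens, lr=0.1):
    """Toy Test-Time Training layer: memory lives in inner-model weights W,
    which are trained (by gradient descent) as the sequence is processed.
    The real method uses learned projections and richer losses; this sketch
    keeps only the weights-as-memory mechanism.
    """
    d = tokens.shape[1]
    W = np.zeros((d, d))                 # inner-model weights = the memory
    outputs = []
    for x in tokens:
        pred = W @ x                     # inner model's reconstruction of x
        grad = np.outer(pred - x, x)     # grad of 0.5*||W x - x||^2 w.r.t. W
        W -= lr * grad                   # the test-time "training" step
        outputs.append(W @ x)            # layer output after the update
    return np.stack(outputs)

seq = np.random.default_rng(1).normal(size=(16, 4))
out = ttt_layer(seq)
print(out.shape)  # (16, 4)
```

Because the memory capacity scales with the inner model's parameters rather than a fixed state vector, sequence length can grow well beyond what the base architecture was designed for.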
🔊 Amazon Debuts Advanced Voice Model and Enhanced Video Generation

Amazon has launched Nova Sonic, a sophisticated voice model designed for human-like interactions, alongside an upgraded Nova Reel 1.1 video model that delivers substantial improvements in both quality and generation length capabilities.
The highlights:
- Nova Sonic processes voice input and generates remarkably natural speech with an industry-leading latency of just 1.09 seconds, significantly outperforming competing OpenAI voice models across multiple metrics
- The new voice system achieved an impressive 4.2% word error rate across diverse languages and demonstrated 46.7% superior accuracy compared to GPT-4o specifically in challenging noisy, multi-speaker environments
- Nova Reel 1.1 extends video generation capabilities to a full 2 minutes through both automated and manual creation modes, enabling users to craft content either shot-by-shot or through comprehensive single prompts
- Both cutting-edge models are now available through the Amazon Bedrock platform, with Nova Sonic offering approximately 80% cost savings compared to equivalent OpenAI alternatives
Market impact: Amazon’s simultaneous advances in voice and video technologies demonstrate the retail giant’s comprehensive commitment to competing across the full generative AI spectrum. Combined with their recent Act agentic browser tool, Alexa+’s AI-powered capabilities, and other strategic initiatives, Amazon is positioning itself as an increasingly compelling alternative to more established competitors in the AI development ecosystem.
🧠 Murati’s Thinking Machines Assembles Ex-OpenAI Talent Powerhouse

Thinking Machines Lab, the ambitious AI startup founded by former OpenAI CTO Mira Murati, has added ex-OpenAI Chief Research Officer Bob McGrew and legendary GPT architect Alec Radford to its advisory board—bringing the proportion of OpenAI alumni in its ranks to nearly half of its entire team.
The highlights:
- An impressive 19 out of 38 listed ‘Founding Team’ members have OpenAI backgrounds, including OpenAI co-founder John Schulman who serves as chief scientist
- McGrew’s arrival comes just months after his September departure from OpenAI following an eight-year tenure, and shortly after announcing his intention to take a break from the industry
- Radford, who played an instrumental role in developing OpenAI’s groundbreaking GPT technology, joined after leaving the company last year to pursue independent research initiatives
- The rapidly expanding startup is reportedly seeking up to $1 billion in funding at a $9 billion valuation, despite maintaining significant secrecy around its product roadmap and technical direction
Market impact: Murati continues methodically assembling an all-star team comprised of many key architects behind ChatGPT, DALL-E, and other transformative AI breakthroughs. Despite the increasingly crowded competitive landscape in generative AI, multiple OpenAI alumni-led ventures—including both Murati’s Thinking Machines and Ilya Sutskever’s Safe Superintelligence Inc.—remain quietly positioned as potential disruptors with extraordinary technical talent pools and leadership credentials.
🤖 Samsung Partners with Google for Gemini-Powered Home Robot

Samsung and Google have announced a significant collaboration to finally bring Ballie—the soccer ball-sized home robot long teased at Samsung’s CES events—to market with Google’s advanced Gemini AI models powering its intelligence.
The highlights:
- Ballie features autonomous mobility via wheels, enabling it to navigate home environments independently while offering capabilities like video projection onto walls, smart device control, and voice-activated task execution
- The collaborative approach combines Google’s Gemini models with Samsung’s proprietary AI technologies to deliver sophisticated multimodal capabilities spanning voice, audio, and visual processing
- Initial market launch targets the United States and South Korea this summer, with the companies revealing plans for future third-party application support to expand functionality
- After multiple iterations since its initial 2020 CES reveal, the robot is finally receiving an official commercial release, representing years of development and refinement
Market impact: This partnership signals a significant escalation in the consumer AI robotics space, with Samsung leveraging its extensive SmartThings ecosystem alongside Google’s AI capabilities to potentially define the emerging smart home robot category—challenging competitors to match the combination of hardware integration, AI sophistication, and ecosystem advantages that these technology giants bring to the table.
🧠 OpenAI Supercharges ChatGPT with Persistent Memory

OpenAI has launched a transformative update to ChatGPT’s memory capabilities, enabling the AI assistant to automatically remember and reference information across all conversations, delivering substantially more personalized and contextually relevant responses.
The highlights:
- ChatGPT now tracks context across all conversations, continuously capturing users’ preferences, interests, needs, and dislikes without manual prompting
- This accumulated knowledge allows the assistant to craft responses specifically tailored to each individual user, creating interactions that feel “noticeably more relevant and useful”
- The system shifts from requiring explicit memory instructions to automatic information retention, eliminating the need for users to repeatedly provide the same context
- Users maintain control through simple chat-based prompts that can modify what information ChatGPT retains about them
Market impact: This feature represents a significant competitive advantage for daily ChatGPT users who previously faced friction when switching between conversations or having to repeatedly provide context. The expanded memory functionality signals the beginning of a new era where AI systems genuinely develop familiarity with users over time, becoming increasingly personalized and valuable with continued interaction.
Privacy safeguards include options to disable the memory feature through ChatGPT settings or utilize temporary chat mode for sensitive conversations users prefer not to have remembered.
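The behavior described above can be sketched as a toy memory store. This is an illustrative model of the feature's user-facing behavior, not OpenAI's implementation; the class and method names are hypothetical.

```python
# Toy sketch of persistent, cross-conversation assistant memory: facts are
# captured automatically, survive individual sessions, and remain editable
# by the user. All names here are hypothetical illustrations.
class AssistantMemory:
    def __init__(self):
        self.facts = {}          # persists across individual conversations

    def remember(self, key, value):
        """Capture a user detail without an explicit 'remember this' prompt."""
        self.facts[key] = value

    def recall(self):
        """Inject retained facts into a new conversation's context."""
        return dict(self.facts)

    def forget(self, key):
        """User-controlled deletion, analogous to a chat-based edit request."""
        self.facts.pop(key, None)

mem = AssistantMemory()
mem.remember("diet", "vegetarian")
mem.remember("language", "Python")
mem.forget("diet")
print(mem.recall())  # {'language': 'Python'}
```

The key shift is from explicit, per-conversation instructions to automatic retention with user-controlled deletion, which is what the temporary-chat and settings toggles address for sensitive content.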
💰 Mira Murati’s AI Startup Pursues Historic Funding Round

Former OpenAI CTO Mira Murati’s new venture, Thinking Machines Lab, is reportedly in discussions to secure one of the largest seed funding rounds in history, with a team comprised substantially of OpenAI veterans driving extraordinary investor interest.
The highlights:
- The freshly launched startup is negotiating a staggering $2B funding round at a valuation of “at least” $10B, doubling Murati’s initial fundraising target
- Nearly half of the founding team comes directly from OpenAI, bringing significant expertise and credibility to the nascent venture
- Murati established the company just six months after departing OpenAI, where she spent nearly seven years developing groundbreaking AI systems including ChatGPT
- While specific details remain confidential, the company’s stated direction focuses on creating “widely understood, customizable, and generally capable” AI systems
Market impact: The extraordinary funding pursuit by Thinking Machines Lab, coupled with Ilya Sutskever’s SSI reportedly raising capital at a $30B valuation, signals an unprecedented escalation in AI investment appetites. What’s particularly remarkable is investors’ willingness to commit massive capital to startups with neither public products nor defined revenue strategies, highlighting the intense competition to secure positions in what many believe will be the next generation of AI powerhouses.
🐛 Microsoft Study Reveals AI’s Persistent Debugging Limitations

Microsoft Research has published comprehensive findings demonstrating that even cutting-edge AI systems continue to struggle significantly with software debugging tasks that human programmers routinely handle with ease.
The highlights:
- The research team evaluated nine leading LLMs, including Claude 3.7 Sonnet, by implementing a “single prompt-based agent” tasked with resolving 300 challenging debugging issues from SWE-bench Lite
- Results showed these AI agents failed to complete approximately half of all assigned tasks, despite leveraging frontier models specifically renowned for their coding capabilities
- When equipped with debugging tools, Claude 3.7 Sonnet emerged as the top performer with a 48.4% success rate, substantially outperforming OpenAI’s o1 and o3-mini models which achieved only 30.2% and 22.1% success rates respectively
- Researchers identified the primary performance gap as stemming from insufficient sequential decision-making data in training datasets, specifically highlighting the absence of human debugging traces that would teach models effective troubleshooting approaches
Market impact: This rigorous assessment serves as a sobering reality check amid the continued multi-billion dollar investments flowing into AI coding agents from industry giants like Google and Meta. While code generation capabilities have advanced impressively, these findings underscore a critical shortcoming in AI’s ability to master debugging—widely considered one of programming’s most essential and nuanced skills—suggesting a significant gap remains before AI can truly replace human programmers in complex development workflows.
🎯 QUICK HITS
Sam Altman has revealed that OpenAI is changing its roadmap, with plans to release o3 and o4-mini in weeks and a “much better than originally thought” GPT-5 in months.
Midjourney has rolled out V7, the company’s first major model update in a year, featuring upgrades to image quality, prompt adherence, and a voice-capable Draft mode.
OpenAI has reportedly explored acquiring Jony Ive and Sam Altman’s AI hardware startup for over $500M, aiming to develop screenless AI-powered personal devices.
Microsoft has showcased its game-generating Muse AI model’s capabilities with a playable (but highly limited) browser-based Quake II demo.
Anthropic Chief Science Officer Jared Kaplan has stated in a new interview that Claude 4 will launch in the “next six months or so.”
A federal judge has rejected OpenAI’s motion to dismiss The NYT lawsuit, ruling that the newspaper could not have known about the alleged infringement before ChatGPT’s release.
Amazon Kindle has launched an AI-powered recap feature to summarize book series for readers.
Google has introduced web search capabilities to its NotebookLM tool, allowing users to easily discover and incorporate online sources.
Generative AI tools are reportedly now providing free mental health advice for various DSM-5-recognized disorders.
DeepSeek has introduced a new AI reasoning method in collaboration with Tsinghua University, enhancing its upcoming model’s performance.
Meta GenAI lead Ahmad Al-Dahle has firmly denied allegations that the company trained Llama 4 on benchmark test sets, declaring these claims are “simply not true.”
Runway has released Gen-4 Turbo, a significantly accelerated version of its cutting-edge AI video model that dramatically reduces generation time, producing 10-second videos in just 30 seconds.
Google has expanded AI Mode accessibility to a broader user base while introducing multimodal search capabilities, enabling users to pose complex questions about images using the combined power of Gemini and Google Lens.
Krea has secured $83M in fresh funding, with strategic plans to enhance its unified AI creative platform through the addition of audio capabilities and enterprise-focused features.
ElevenLabs has introduced new MCP server integration, enabling platforms like Claude to seamlessly access advanced AI voice capabilities and create sophisticated automated agents.
University of Missouri researchers have developed an innovative starfish-shaped wearable heart monitor that achieves impressive 90% accuracy in detecting cardiac issues using AI-powered sensor technology.
NVIDIA has released Nemotron-Ultra, a 253B parameter open-source reasoning model that outperforms both DeepSeek R1 and Llama 4 Behemoth across key industry benchmarks.
OpenAI has published its EU Economic Blueprint, proposing an ambitious €1B AI accelerator fund and setting a goal to train 100 million Europeans in essential AI skills by 2030.
Deep Cogito has emerged from stealth with Cogito v1 Preview, introducing a family of open-source models that reportedly surpass the best available open models of comparable size.
Google has rolled out its Deep Research feature on Gemini 2.5 Pro, claiming superior research report generation capabilities compared to competitors while introducing new audio overview functionality.
Chinese scientists have used the Origin Wukong quantum computer to finetune 1B-parameter models, achieving 15% training improvements and 76% reduction in model size.
AI2 and Google Cloud have announced a $20M joint investment to power and accelerate AI-driven cancer breakthroughs through the Cancer AI Alliance’s comprehensive research platform.
Snapchat has debuted Sponsored AI Lenses for brands, leveraging AI-powered advertising technology to place users’ likenesses in personalized branded imagery.
Anthropic has announced a new premium Claude Max tier with options for $100/mo and $200/mo, offering up to 20x higher rate limits and priority access to new features.
The U.S. government has reportedly halted planned restrictions on NVIDIA’s H20 AI chips to China, following CEO Jensen Huang’s promises of new U.S. investments.
Moonshot AI has released Kimi-VL, a lightweight 3B-parameter vision-language model that matches the performance of models 10x larger on reasoning tasks.
UCL researchers have introduced MindGlide, an AI system that analyzes MS brain scans in seconds and outperforms existing tools by up to 60% in detecting disease progression.
The NO FAKES Act has been reintroduced to Congress, with YouTube, OpenAI, IBM and others joining entertainment leaders in support of legislation to combat AI deepfakes.
OpenAI has launched the ‘Pioneers Program’, aiming to partner with startups on creating industry-specific model evaluations and AI systems for real-world applications.
The EU has unveiled the “AI Continent Action Plan,” committing €200B to build 13 AI factories and aiming to triple data center capacity across Europe within seven years.
Ilya Sutskever’s Safe Superintelligence (SSI) has partnered with Google Cloud to leverage its TPU chips for powering the company’s ambitious research and development initiatives.
Google CEO Sundar Pichai has confirmed that the tech giant will adopt Anthropic’s open Model Context Protocol, enabling Google’s AI models to seamlessly connect with diverse data sources and applications.
Canva has introduced Visual Suite 2.0 at Canva Create 2025, featuring advanced AI capabilities and a revolutionary voice-enabled AI creative assistant that generates fully editable content.
OpenAI has countersued Elon Musk, alleging a pattern of harassment and asking a federal judge to bar him from engaging in “further unlawful and unfair action.”
OpenAI has also open-sourced BrowseComp, a sophisticated benchmark designed to measure AI agents’ ability to locate difficult-to-find information across the internet.
TikTok parent ByteDance has announced Seed-Thinking-v1.5, an impressive 200B parameter reasoning model—with 20B active parameters—that outperforms DeepSeek’s R1 model in benchmark testing.
Elon Musk’s AI startup, xAI, has made available its flagship Grok-3 model through API access, with competitive pricing starting at $3 and $15 per million input and output tokens respectively.
AI company Writer has launched AI HQ, a comprehensive end-to-end platform enabling enterprises to build, activate, and effectively supervise AI agents across their organization.
🧰 Trending AI Tools
DreamActor-M1 – Turn images into full-body animations for motion capture
‘Buy for Me’ – Amazon AI agent that makes purchases from other websites
Adobe Premiere Pro – Features like Generative Extend, Media Intelligence
ActionKit – Add 1000+ integration actions to your AI agent
Journey – Turns interest into action
DreamPress – Generates personalized stories about anything
AI Profile Picture Maker – Creates awesome profile pictures
Mockey AI – Free product mockup generator with 5000+ templates
Llama 4 – Meta’s newest family of open-weights AI models
Microsoft Copilot – New personalization upgrades and agentic actions
Midjourney V7 – New Draft Mode for fast, voice-enabled creation
EverTutor Live – AI-powered voice tutor that teaches, adapts, and interacts
Gemini Live – Real-time visual AI on Android devices and via screen sharing
Runway Gen-4 Turbo – Produce 10-second videos in just 30 seconds
AI Mode – AI search with expanded visual and multilingual capabilities
ElevenLabs MCP – Create automated voice agents on other platforms
Nova Sonic – Amazon’s new speech-to-speech AI voice model
Cogito v1 Preview – New open-source LLM model family trained using IDA
Gemini Deep Research – Now available with 2.5 Pro Experimental
Nemotron Ultra – NVIDIA’s 253B parameter open-source reasoning model
Gemini 2.5 Flash – Google’s new faster and cheaper reasoning model
Kimi-VL – Moonshot’s 3B-parameter VLM matching 10x larger models
Pika Twists – Manipulate objects or characters in footage while keeping the scene intact
askplexbot – Perplexity’s new Telegram channel, allowing users to interact with or bring the chatbot into group chats
The pace of AI innovation is relentless! Which of these updates from Google, Meta, OpenAI, or others excites or concerns you the most? How do you see these developments impacting the future of AI and our daily lives? Share your thoughts and predictions in the comments below!