Important information about CapCut in 2025

In 2025, CapCut moved beyond being just a “TikTok companion” and evolved into a powerhouse for both professional and everyday creators. The year was defined by a massive shift toward generative AI and a major restructuring for users in the United States.

Here are the most important updates and news from CapCut in 2025:

1. The “AI Lab” Launch

CapCut consolidated its most advanced generative tools into a new section called AI Lab. This hub introduced features designed to turn simple ideas into finished content:

  • Instant AI Video: Users can now input a script or simple prompt, and the AI generates a complete video with relevant visuals, captions, and music.
  • AI Characters & Avatars: An expanded set of lifelike AI avatars that can “speak” your script, a favorite for creators running faceless YouTube or TikTok channels.
  • AI Video Enhancer: A one-tap tool released late in the year that boosts resolution, performs color correction, and reduces noise in low-quality footage.

2. Major Policy Update (June 2025)

In June, CapCut released a significant update to its Terms of Service (TOS) that sparked a lot of conversation among professionals.

  • Content Licensing: The updated terms clarified that by uploading content (including private drafts) to their servers, users grant ByteDance a worldwide, perpetual, and royalty-free license to use, modify, and even monetize that content.
  • Likeness Usage: The license includes your face and voice, meaning clips could technically be used by the platform for promotional purposes without further consent.

3. “CapCut US” & Migration

Due to regulatory changes, a separate CapCut US version was introduced in September 2025 to house American user data domestically.

  • Data Firewall: This version stores U.S. creator data on domestic infrastructure to ensure regulatory compliance while keeping the same editing interface.
  • Migration Deadline: U.S. users were given until early 2026 to migrate their cloud projects to the new infrastructure before the legacy global app ceased to function in the region.

4. B2B Pivot: CapCut Commerce Pro

CapCut made a major move into the business space by launching Commerce Pro, a subscription tier specifically for e-commerce brands and dropshippers.

  • URL-to-Video: Users can paste a product link (from sites like Shopify), and the AI automatically pulls product photos and descriptions to generate a promotional ad.
  • AI Model Try-On: A specialized tool that allows virtual avatars to “wear” clothing products for digital catalogs.
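As a rough illustration of the kind of input a URL-to-Video pipeline starts from, a product page's Open Graph metadata (title, image, description) can be pulled with Python's standard-library HTML parser. The Open Graph property names are standard, but this pipeline sketch is an assumption, not CapCut's actual implementation:

```python
from html.parser import HTMLParser

class ProductMetaParser(HTMLParser):
    """Collect Open Graph product metadata (title, image, description)."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop, content = attrs.get("property"), attrs.get("content")
        if prop in ("og:title", "og:image", "og:description") and content:
            self.meta.setdefault(prop, content)

# Hypothetical product page snippet for illustration
sample = (
    '<html><head>'
    '<meta property="og:title" content="Canvas Tote Bag">'
    '<meta property="og:image" content="https://example.com/tote.jpg">'
    '</head></html>'
)
parser = ProductMetaParser()
parser.feed(sample)
print(parser.meta["og:title"])  # Canvas Tote Bag
```

The extracted title and image would then be the raw material for generating an ad script and visuals.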

5. Advanced Editing Fundamentals

While AI took the spotlight, CapCut also improved its core editing suite:

  • Transcript-Based Editing: Much like professional tools, you can now edit a video by simply deleting words from the auto-generated transcript.
  • Multi-Track Audio: Added support for layering multiple audio tracks with much finer precision, making complex sound design easier on mobile and desktop.
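Transcript-based editing works because each transcript word carries timestamps: deleting words from the text translates into time ranges to cut from the video. A minimal sketch of that mapping (the word/timestamp format is an assumption):

```python
def keep_ranges(words, deleted_indices):
    """Given a timed transcript, return the time ranges to keep
    after the listed words are deleted. Each word is (text, start, end)."""
    deleted = set(deleted_indices)
    ranges = []
    for i, (_, start, end) in enumerate(words):
        if i in deleted:
            continue
        if ranges and abs(ranges[-1][1] - start) < 1e-9:
            ranges[-1] = (ranges[-1][0], end)  # merge contiguous words
        else:
            ranges.append((start, end))
    return ranges

transcript = [("hello", 0.0, 0.4), ("um", 0.4, 0.6), ("world", 0.6, 1.0)]
print(keep_ranges(transcript, [1]))  # [(0.0, 0.4), (0.6, 1.0)]
```

Deleting the filler word “um” yields two segments; the editor then splices those segments together.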

CapCut 2025 AI Features Tutorial

This video provides a practical walkthrough of the AI tools released during the year, including the Instant AI Video maker and the AI Avatar feature.

CapCut AI Video Maker – New AI Features in 2025 (Full Tutorial) – YouTube

Gemini news and information in 2025

2025 marked a massive “agentic” era for Gemini, as it transitioned from a chatbot into a proactive assistant capable of reasoning and taking action.

Here are the most important news and updates from last year:


The Evolution of Gemini Models

Google moved through two major generations of the model in 2025, focusing on speed and “deep” thinking.

  • Gemini 2.0 (Early 2025): Launched in early February, this generation introduced Gemini 2.0 Flash (for speed) and Gemini 2.0 Pro (for complex coding). It featured a massive 2-million-token context window, allowing it to process thousands of pages of text or hours of video at once.
  • Gemini 2.5 (Mid 2025): Announced at Google I/O in May, this update introduced Deep Think mode. This allows the model to “pause and reason” through multiple hypotheses before answering, significantly improving performance in advanced math and science.
  • Gemini 3.0 (Late 2025): Released in November, Gemini 3 became the new flagship. It focused on native multimodality, meaning it processes audio, video, and text simultaneously with almost zero latency, making conversations feel truly human.

Major Feature Releases

Google expanded where and how you can use Gemini with several “drops” throughout the year.

  • Gemini Live (Now Free): Previously a paid feature, Gemini Live (the conversational voice mode) became free for all iOS and Android users in May 2025. It also added a camera-sharing feature, letting you point your phone at a broken sink or a math problem to get real-time help.
  • Project Mariner & Agent Mode: Google introduced “Agents” that can actually use your browser to do things for you—like booking flights, comparing products, or filling out forms—instead of just telling you how to do it.
  • Gemini in Chrome: Gemini was integrated directly into the Chrome browser (Windows/macOS), allowing you to summarize any webpage or ask questions about the site you are currently viewing without leaving the tab.

Creative & Professional Tools

  • Flow & Veo 3: Google launched Flow, a filmmaking tool powered by the Veo 3 video model. It can generate 4K video clips with native audio (sound effects and speech) directly from text prompts.
  • Gemini Canvas: A new workspace for writers and designers that can transform a simple research report into a full website, infographic, or interactive quiz.
  • Google AI Ultra: A new $250/year subscription tier was launched, giving “power users” first access to experimental models like Gemini 3 Deep Think and higher limits for video generation.

Integration Highlights

  • Android 16 & Pixel 10: Gemini became the “heart” of the new Android OS, handling on-device tasks like real-time translation and “Smart Reply” which learns your specific writing style from Gmail and Drive.
  • Google Home: Gemini now writes “event descriptions” for your Nest cameras (e.g., “The cat is playing with a box in the hallway”) and allows you to create complex home automations using natural language.

2026 is truly the year Gemini has shifted from a conversational partner to an action-oriented agent. Setting up these features involves moving beyond simply typing prompts and into “configuring” how Gemini interacts with your browser, apps, and code.

Here is a guide to setting up and using the latest agentic features:


1. Project Mariner (Web Agent)

This is the feature that allows Gemini to actually “drive” your Chrome browser to complete tasks like booking flights or researching venues.

  • How to Set Up:
    1. Go to labs.google.com/mariner.
    2. Install the Project Mariner extension from the Chrome Web Store.
    3. Pin the extension to your bar for easy access.
  • How to Use:
    • Open the extension sidebar while on any website.
    • Give a command like: “Compare the top 3 hotels in Rome for under $200 and draft an email to my partner with the options.”
    • Watch the Live View: You will see the agent opening tabs and clicking buttons. You can click “Take Over” at any point if you want to finish the task manually.

2. Gemini 3 “Deep Think” Mode

If you have a complex problem that requires reasoning (like a business strategy or a difficult coding bug), you can toggle this mode to make Gemini “think” before it speaks.

  • How to Activate:
    1. Open the Gemini app or go to gemini.google.com.
    2. Ensure you have Gemini 3 Pro selected in the model picker.
    3. Look for the Deep Think toggle (usually a brain or sparkle icon) in the prompt bar.
  • What to Expect: It may take 30–60 seconds to respond because it is evaluating multiple solutions in the background. You can even view a “Thought Summary” to see its internal logic.

3. Gemini Agent (Personal Assistant)

This handles multi-step tasks across your Google Workspace (Gmail, Calendar, Drive).

  • How to Enable:
    1. Go to Settings > Extensions in the Gemini app.
    2. Ensure Google Workspace and Personal Intelligence (Beta) are toggled ON.
  • Practical Example:
    • “Organize my inbox for the week: prioritize emails from my clients and draft replies for the ones asking for quotes.”
    • The agent will create a list of “Proposed Actions” for you to approve before it sends anything.
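The propose-then-approve pattern described above can be sketched in a few lines: the agent queues actions, and nothing runs until a human approves it. The class and method names here are illustrative, not Gemini's actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    approved: bool = False

@dataclass
class AgentRun:
    """Sketch of a propose-then-approve loop: the agent only
    executes actions a human has explicitly approved."""
    actions: list = field(default_factory=list)

    def propose(self, description):
        self.actions.append(ProposedAction(description))

    def approve(self, index):
        self.actions[index].approved = True

    def execute(self):
        return [a.description for a in self.actions if a.approved]

run = AgentRun()
run.propose("Draft reply to client quote request")
run.propose("Archive newsletter emails")
run.approve(0)
print(run.execute())  # ['Draft reply to client quote request']
```

The unapproved action is simply skipped, which is why the agent “will not send anything” without your sign-off.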

4. Agent Mode for Developers

If you use VS Code or Android Studio, the new “Agent Mode” can now write and fix code across multiple files autonomously.

  • Setup:
    1. In your IDE, open the Gemini Code Assist chat.
    2. Toggle the Agent switch at the top of the chat window.
  • Pro Tip: Create a file named AGENT.md in your project root. Write your coding style and rules there; the agent will read this file every time it performs a task to ensure it follows your specific standards.
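A hypothetical AGENT.md might look like the following; the contents are purely illustrative, since the agent simply reads whatever rules you write:

```markdown
# Agent Rules

## Style
- Follow the project's existing formatting; 4-space indent, max line length 120.
- Prefer small, single-purpose functions over long methods.

## Boundaries
- Never modify files under `vendor/` or `generated/`.
- Run the unit tests after every multi-file change and report failures.
```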

ChatGPT news and information in 2025

In 2025, ChatGPT shifted from being a “chatbot” to a comprehensive AI platform and operating system. The year was marked by the release of the highly anticipated GPT-5 family and the expansion of reasoning-focused models like the o3 series.

Here is a breakdown of the most important information and news regarding ChatGPT in 2025:

1. The Launch of GPT-5

OpenAI released several versions of GPT-5 throughout the year, moving away from a single model toward specialized tiers:

  • GPT-5.2 (Thinking/Pro): Released late in the year, these models focus on “Deep Reasoning.” They are designed for mission-critical tasks in legal, finance, and healthcare, featuring significantly fewer hallucinations and better multi-step planning.
  • GPT-5 Instant: A faster, more efficient model that replaced GPT-4o for everyday tasks, offering lightning-fast responses with better conversational nuance.
  • Integrated Memory: ChatGPT now features “Long-term Memory” across sessions, meaning it remembers your preferred writing style, database schemas, or project details without you needing to repeat instructions.

2. The “o3” Reasoning Series & Deep Research

Building on the success of the o1 model, OpenAI released the o3 series in early 2025 to compete with rivals like DeepSeek.

  • OpenAI Deep Research: A standout feature launched in February 2025. It uses the o3 model to browse the web for up to 30 minutes, synthesizing information into a comprehensive, cited report.
  • Reasoning Effort Controls: Users can now toggle between “Low,” “Medium,” and “High” reasoning effort. Higher effort allows the AI to “think” longer to solve complex math or coding problems.
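As a sketch of how an effort control like this might surface in an API request, the helper below builds a chat-completion request body with a reasoning-effort hint. The model name and exact parameter shape are assumptions for illustration, not a documented contract:

```python
def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a chat-completion request body with a reasoning-effort hint.
    Higher effort trades response latency for more internal 'thinking'."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "o3-mini",  # assumed o-series model name
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Prove that the sum of two odd numbers is even.",
                    effort="high")
print(req["reasoning_effort"])  # high
```

Keeping the effort level as an explicit parameter makes it easy to default to “low” for chat and reserve “high” for math or debugging prompts.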

3. ChatGPT as a Platform (In-Chat Apps)

At DevDay 2025, OpenAI introduced a major UI shift: Integrated Apps.

  • Direct In-Chat Actions: You can now call specific apps directly within a conversation. For example, typing “Spotify, make a workout playlist” or “Zillow, find houses in Tekirdağ” triggers a mini-app interface inside the chat window.
  • Instant Checkout: OpenAI introduced commerce integrations, allowing users to buy products or book flights directly through ChatGPT without leaving the app.

4. Advanced Voice & Multimodal Features

  • Sora 2 Integration: OpenAI’s video generation tool, Sora 2, was integrated more deeply, allowing ChatGPT to generate short, high-fidelity video clips from text prompts.
  • Interactive Voice: The Advanced Voice Mode became the default, allowing for real-time, emotional conversations. It can now “see” through your camera to discuss your surroundings or help with physical tasks in real-time.

5. Summary of 2025 Key Releases

| Feature/Model | Release Period | Key Impact |
| --- | --- | --- |
| o3-mini | Jan 2025 | High-speed technical reasoning for free/paid users. |
| Deep Research | Feb 2025 | Automated professional-grade research reports. |
| GPT-5 / 5.2 | Mid-Late 2025 | Major leap in logic, memory, and reliability. |
| Apps SDK | Oct 2025 | Allowed 3rd parties to build tools inside ChatGPT. |
| Your Year in GPT | Dec 2025 | A “Spotify Wrapped” style summary of your AI usage. |

In 2026, OpenAI has significantly restructured its pricing to accommodate casual users, while also introducing a powerful “Deep Research” mode for professionals.

1. ChatGPT Pricing Tiers (2026)

The biggest news is the global rollout of ChatGPT Go, a mid-range tier designed for those who need more than the free version but don’t want to pay the full price for Plus.

| Tier | Monthly Price | Best For… | Key Features |
| --- | --- | --- | --- |
| Free | $0 | Casual use | Access to GPT-5.2 (limited); Basic Search. |
| Go | $8 | Power casuals | 10x more messages than Free; uses GPT-5.2 Instant; includes ads. |
| Plus | $20 | Professionals | GPT-5.2 Thinking model; no ads; Sora 2 video access; 25 Deep Research queries. |
| Pro | $200 | AI power users | GPT-5.2 Pro; 250 Deep Research queries; max context window (millions of tokens). |

Note on “Go” Tier: While affordable, the Go tier does not include Sora 2 video generation and displays contextually relevant ads at the bottom of the chat interface.


2. How to Use Deep Research Mode

Deep Research is no longer just a “Google search on steroids”; it is an autonomous agent that can take 5 to 30 minutes to complete a task.

How to Start:

  1. Select the Tool: Click the “+” (Tools menu) in the chat bar or type /deepresearch directly.
  2. Define Your Sources: A new 2026 feature allows you to specify sites. You can tell it to “Only use academic journals” or “Prioritize technical documentation from GitHub.”
  3. Review the Research Plan: Before it starts, ChatGPT will present a “Research Plan” (e.g., “I will first look at X, then compare it to Y”). You can edit this plan to save time.
  4. The Fullscreen Viewer: Once finished, the report opens in a new fullscreen document view with a clickable Table of Contents and a sidebar showing every source it cited.
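The plan-review step in particular can be thought of as editing a simple ordered list of steps before the run starts. A tiny sketch (the step wording is illustrative):

```python
def edit_plan(steps, drop=()):
    """Return the research plan with the listed step indices removed,
    preserving order - mirroring the 'edit this plan to save time' step."""
    dropped = set(drop)
    return [s for i, s in enumerate(steps) if i not in dropped]

plan = [
    "Collect Toyota solid-state battery patents (2023-2025)",
    "Collect QuantumScape patents (2023-2025)",
    "Scan general battery news coverage",
    "Compare filing trends and draft the report",
]
trimmed = edit_plan(plan, drop=[2])  # cut the step we don't need
print(len(trimmed))  # 3
```

Trimming an irrelevant step before the run starts is how you “save time” on a 15-30 minute research job.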

3. Why Use Deep Research vs. Standard Search?

  • Standard Search: Best for “What is the weather in New York?” or “Who won the game last night?” (Instant response).
  • Deep Research: Best for “Compare the last three years of solid-state battery patents between Toyota and QuantumScape and write a 10-page report.” (Takes ~15 minutes).