How to use Seedance 2.0 for video ads (step-by-step)

Step-by-step guide to using Seedance 2.0 for video ads — access options, prompt formulas, asset prep, and scaling to 15+ ad variations per day.

February 27, 2026 · Cospark Team

Seedance 2.0 can turn a product photo and a text prompt into a finished video ad with native audio, 2K resolution, and multi-shot editing in a single generation. This guide walks through the exact workflow for creating video ads with Seedance, from access and asset prep to prompt writing and scaling output across campaigns.

If you want the broader overview of what Seedance is and how the model works, check our complete guide to Seedance 2.0. This article is about putting it to work.

Where can you actually access Seedance 2.0 right now?

This is the first question everyone asks, and the answer keeps changing. As of late February 2026, here's the situation:

Jimeng (即梦) is ByteDance's own creative platform and offers the fullest Seedance 2.0 experience, including the "All-Round Reference" multi-modal mode and 2K upscaling. The catch: the interface is in Chinese, and you need a Chinese phone number for sign-up. If you can navigate that, it's the most capable option.

Dreamina (dreamina.capcut.com) is ByteDance's international-facing platform. You can sign up with Google, TikTok, Facebook, or email. But as of late February 2026, Seedance 2.0 access on Dreamina is limited to invite-only Creative Partner Program members. If you're in, great. If not, keep reading.

Third-party platforms like ImagineArt, Seedance2ai.online, and others have built browser-based access points. These are the easiest way in for most people — no Chinese phone number, no waitlist. Features vary, and you won't always get the full multi-modal mode, but for basic text-to-video and image-to-video ad generation, they work.

The global API that ByteDance originally planned to release in late February 2026 has been delayed. BytePlus confirmed they're adding copyright protection and deepfake defense mechanisms first. No new date announced. Plan around platform access, not API access, for now.

| Platform | Full features | Language | Sign-up barrier | Cost |
| --- | --- | --- | --- | --- |
| Jimeng | Yes | Chinese | Chinese phone number | Free tier + credits |
| Dreamina | Partial | English | Invite-only (CPP) | Free tier + credits |
| ImagineArt | Partial | English | Email | Free trial, then paid |
| Seedance2ai.online | Partial | English | Email | Free tier available |

What assets do you need before you start?

Most people jump straight to prompting. That's a mistake. Seedance 2.0's biggest advantage over Sora or Veo is its multi-modal input. It can fuse up to 12 reference files (images, videos, audio) in a single generation. The quality of your inputs directly controls the quality of your output.

Here's what to prepare before you open the platform:

Product photography. Clean, well-lit product shots on a white or neutral background. You want at least 3-5 angles. Seedance uses these as visual anchors, so blurry phone shots will give you blurry ads. Invest 20 minutes in good lighting. It pays off across every generation.

Brand reference images. Pull 2-3 screenshots of existing ads or visual content that match your brand's aesthetic. Color palette matters. Seedance picks up on the tonal and compositional patterns in your references and carries them into the generated video.

Audio references (optional but powerful). If you have a brand jingle, a specific music vibe, or a voiceover clip, upload it. Seedance 2.0 generates native audio alongside video, and giving it an audio reference to steer from dramatically improves the sound design. Without a reference, you'll get generic ambient audio that may or may not match your brand.

A motion reference video (for the All-Round mode). This is the part most people skip, and it makes the biggest difference. Find a video that has the camera movement and pacing you want: a slow dolly-in on a product, a handheld UGC-style walkthrough, a quick-cut montage. Upload it alongside your product photo. Seedance will replicate the motion language while placing your product in the scene.

How do you write prompts that actually work for ads?

The prompt formula that works best with Seedance 2.0 follows a specific structure. Vague prompts get vague results. Treat this like a creative brief for a videographer.

The formula: Subject + Action + Scene + Camera Language + Style + Constraints

Here are some examples that work:

"A matte black wireless earbud case sits on a dark marble surface. Camera slowly dollies in from a 45-degree angle. Warm key light from the left, subtle rim light from behind. Shallow depth of field. Cinematic, 4K, product photography aesthetic. Stable motion, no artifacts."

"A woman in her late 20s, casual outfit, picks up @product_image from a desk and shows it to camera. Handheld, slightly shaky, natural daylight from a window. She smiles and nods. iPhone selfie camera look. Warm color grade, authentic UGC feel. Smooth face rendering."

"Close-up of hands opening @product_image packaging on a kitchen counter. Pull back to reveal a modern apartment interior, morning light streaming through windows. Cut to a person using the product while sipping coffee. Lifestyle ad aesthetic, soft warm tones. Two shots, natural transition."

Notice the pattern. Every good Seedance ad prompt does four things:

  1. Describes the subject and action concretely. Not "someone using a product." Instead: "a woman in her late 20s picks up the earbud case from a desk."
  2. Specifies the camera. "Slow dolly-in from 45 degrees" or "handheld, slightly shaky" or "close-up, then pull back." Seedance understands camera language. Use it.
  3. Sets the lighting and mood. "Warm key light from the left" or "natural daylight from a window" or "morning light streaming through windows." Lighting is half the production value.
  4. Adds constraint phrases. "Stable motion, no artifacts," "smooth face rendering," "stable face" — these reduce the weird AI glitches. Always include them.

Use @tags to reference your uploaded files in the prompt. For example, @product_photo or @motion_reference. This tells Seedance exactly which uploaded asset maps to which part of the prompt. Without @tags, the model may ignore some of your references.
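The formula is mechanical enough to script. Here's a minimal sketch that assembles a prompt from the six parts; there is no official Seedance SDK, so this only builds the string you'd paste into whichever platform you use, and all names are illustrative:

```python
# Default constraint phrases, per the formula above.
DEFAULT_CONSTRAINTS = "Stable motion, no artifacts."

def build_ad_prompt(subject, action, scene, camera, style,
                    constraints=DEFAULT_CONSTRAINTS):
    """Assemble a prompt following:
    Subject + Action + Scene + Camera Language + Style + Constraints."""
    parts = [f"{subject} {action}", scene, camera, style, constraints]
    # Normalize each part to end with exactly one period, skip empties.
    return " ".join(p.strip().rstrip(".") + "." for p in parts if p)

prompt = build_ad_prompt(
    subject="A matte black wireless earbud case",      # or an @tag like @product_photo
    action="sits on a dark marble surface",
    scene="Warm key light from the left, subtle rim light from behind",
    camera="Camera slowly dollies in from a 45-degree angle",
    style="Cinematic, 4K, product photography aesthetic",
)
print(prompt)
```

Swapping `subject` for an `@tag` (e.g. `@product_photo`) keeps the formula intact while binding the prompt to an uploaded asset.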

What's the step-by-step workflow for creating a video ad?

Here's the process from start to finished ad, assuming you've already prepared your assets.

Choose your aspect ratio first

Before writing any prompts, decide where the ad is going. Instagram Reels and TikTok need 9:16. YouTube pre-roll needs 16:9. Feed posts need 1:1. Setting the aspect ratio first ensures Seedance composes every shot for the right frame. Changing it after generation means re-doing everything.
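The placement-to-ratio decision above is a fixed lookup, so it's worth encoding once. A small sketch (the placement keys are just labels for this example, not values from any official API):

```python
# Placement -> aspect ratio, matching the guidance above.
PLACEMENT_RATIOS = {
    "tiktok": "9:16",
    "instagram_reels": "9:16",
    "youtube_shorts": "9:16",
    "youtube_preroll": "16:9",
    "feed_post": "1:1",
}

def aspect_ratio_for(placement: str) -> str:
    """Look up the aspect ratio for a placement; raise on unknown names."""
    return PLACEMENT_RATIOS[placement.lower()]

print(aspect_ratio_for("TikTok"))  # -> 9:16
```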

Upload your references

Upload your product photos, brand reference images, and any audio or video references. Use the All-Round Reference mode if your platform supports it. This lets you combine a product photo with a motion reference video, which gives the best results for ad content. You can upload up to 9 images, 3 videos (total under 15 seconds), and 3 audio files (MP3, total under 15 seconds).

Write your prompt using the formula

Use the Subject + Action + Scene + Camera + Style + Constraints structure. Reference your uploaded files with @tags. Be specific about camera movement and lighting. Include constraint phrases like "stable motion, smooth face" at the end.

Generate and evaluate

Hit generate. Seedance 2.0 typically returns a clip in under 60 seconds. Watch it. Check for: product accuracy (does it look like your actual product?), motion quality (any weird warping or jitter?), audio match (does the sound fit the visual?), and overall brand feel (would you post this?).

Iterate on what's off

If the camera movement is wrong, add more specificity: "slow zoom-in" instead of just "zoom." If the lighting is flat, try "dramatic side lighting with deep shadows." If the face looks off, add "photorealistic human face, stable expression, no distortion." Each iteration usually takes 30-60 seconds. Plan for 3-5 rounds before you get something solid.

Post-production polish

Raw Seedance output is good but usually needs a final pass. Color grading, sharpness adjustments, and audio leveling bring it to broadcast quality. You can do this in any video editor, or use an AI editing tool that handles the finishing touches automatically.

How do you scale Seedance ad production beyond one-offs?

Making a single video ad with Seedance is cool. Making 15 variations per day for A/B testing across TikTok, Meta, and YouTube is where it actually changes your workflow.

The key is building reusable prompt templates. Take your winning prompt, swap out the variable parts (product image, action, scene), and keep the constants (camera style, lighting, brand aesthetic, constraint phrases). Save these as templates. One marketer can realistically produce 10-15 high-quality ad variations per day this way — compared to maybe one per day with a traditional editor.
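The template idea above can be sketched in a few lines: the camera, lighting, style, and constraint phrases are literal text, and only the variable slots change per variation. Everything here is illustrative; there is no official Seedance template format.

```python
# Constants stay baked into the template; only subject/action/scene vary.
TEMPLATE = (
    "{subject} {action}. {scene}. "
    "Camera slowly dollies in from a 45-degree angle. "
    "Warm key light from the left, shallow depth of field. "
    "Cinematic product photography aesthetic. Stable motion, no artifacts."
)

# Five hook variations for the same product (hypothetical examples).
hooks = [
    ("@product_photo", "snaps open to reveal the earbuds", "Dark marble countertop"),
    ("@product_photo", "spins slowly on a turntable", "Minimal white studio"),
    ("@product_photo", "is pulled from a jacket pocket", "Morning commute, soft daylight"),
    ("@product_photo", "is unboxed by a pair of hands", "Warm kitchen counter"),
    ("@product_photo", "drops gently onto a wooden desk", "Home office, evening lamp light"),
]

variations = [TEMPLATE.format(subject=s, action=a, scene=sc) for s, a, sc in hooks]
for v in variations:
    print(v)
```

Each string goes into a separate generation; the shared constants are what keep the brand feel consistent across the batch.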

Here's a practical scaling approach:

Hook testing. Write 5 different opening scenes for the same product. Keep the same camera style and brand aesthetic across all of them. Generate all 5, pick the 2-3 that grab attention fastest, and run them as split tests on TikTok or Meta. You burned maybe 30 minutes instead of 3 days of shoot time.

Platform adaptation. Take your hero ad and regenerate it in 9:16, 16:9, and 1:1. Adjust the prompt slightly for each format. Vertical formats work better with close-ups and face-to-camera shots. Horizontal formats give you room for wider scene-setting. Three platform-ready versions from one concept in under 10 minutes.

Multi-language versions. Seedance 2.0 generates native lip-synced speech. You can produce the same ad concept in English, Spanish, and Japanese by changing the dialogue portion of your prompt and referencing the original video as a motion reference. The model maintains visual consistency while generating new dialogue. This used to require three separate voiceover sessions and manual lip-sync editing.

Product lineup rollouts. If you sell multiple products, swap the product image reference while keeping everything else identical — same camera movement, same lighting, same scene. You get consistent brand feel across your catalog without reshooting.
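Combining the last two ideas, a lineup rollout is just the cross product of product references and target formats, with the rest of the prompt held fixed. A sketch with hypothetical tags and a made-up base prompt:

```python
from itertools import product as cross

# Base prompt with a single variable slot for the product reference.
BASE = ("{tag} on a dark marble surface. Camera slowly dollies in. "
        "Warm key light from the left. Cinematic. Stable motion, no artifacts.")

products = ["@earbuds_photo", "@speaker_photo", "@charger_photo"]
ratios = ["9:16", "16:9", "1:1"]

# One generation job per (product, format) pair.
jobs = [{"prompt": BASE.format(tag=tag), "aspect_ratio": ar}
        for tag, ar in cross(products, ratios)]

print(len(jobs))  # 3 products x 3 formats = 9 jobs
```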

What are the current limitations for ad creators?

I'd be lying if I said Seedance 2.0 is ready to replace your entire production pipeline. Here's where it falls short as of February 2026:

Length limits. You're working with 4-15 second clips per generation. For a 30-second ad, you need to generate multiple clips and stitch them. This works but adds complexity and can introduce visual inconsistency between clips.

Text rendering. If your ad needs on-screen text (prices, CTAs, product names), don't rely on Seedance to generate it. AI video models still struggle with readable text. Add it in post-production.

Brand precision. The model picks up general brand aesthetics from references but won't perfectly replicate your exact hex colors or font choices. It gets close, not exact. For brand-critical elements like logos, composite them in post rather than hoping the model nails them.

Access fragmentation. The best features (All-Round Reference, 2K upscaling) are only available on Jimeng, which is locked behind a Chinese-language interface and phone number requirement. International platforms have partial feature sets. This should improve once the global API launches, but right now it's a real friction point.

Legal uncertainty. ByteDance is navigating copyright issues after the model was used to generate videos of real celebrities and copyrighted characters. The delayed API release reflects this. If you're using Seedance for commercial ads, stick to original characters, your own products, and licensed elements. Don't generate ads featuring real people you don't have rights to.

How does Seedance compare to other AI video tools for ads?

Quick rundown against the models ad creators are actually considering:

| Feature | Seedance 2.0 | Sora 2 | Veo 3.1 | Kling AI |
| --- | --- | --- | --- | --- |
| Native audio | Yes | No | Partial | No |
| Max resolution | 2K | 1080p | 4K | 1080p |
| Multi-modal input | 4 types | Text + image | Text + image | Text + image |
| Max clip length | 15s | 20s | 8s | 10s |
| Multi-shot editing | Yes | No | No | No |
| Global access | Fragmented | API + ChatGPT | API + Vertex | Web + API |
| Ad-specific features | Template replication | None | None | None |

Seedance wins on input flexibility and audio. Sora wins on accessibility. Veo wins on raw visual fidelity. Kling wins on massive brand awareness and a stable platform. For ad creators specifically, Seedance's multi-modal input and template replication features make it the strongest choice — if you can get reliable access.

The practical answer for most teams? Use multiple models depending on the job. Run Seedance for multi-modal ads where you have product photos and want native audio. Use Sora or Veo for quick text-to-video concepts. Use Kling for consistent, no-hassle generation.

Frequently asked questions

Is Seedance 2.0 free to use?

Most platforms offer a free tier or trial credits. Jimeng gives free daily generations. Third-party platforms like ImagineArt and Seedance2ai.online offer free tiers with limited generations, then paid plans starting around $10-20/month for regular use. There's no official API pricing yet since the global API hasn't launched.

Can I use Seedance 2.0 for commercial video ads?

Yes, as long as you're generating original content with your own products and references. Don't generate content featuring real celebrities, copyrighted characters, or trademarked material. ByteDance's terms of service allow commercial use of generated content, but the legal landscape is evolving. Keep your generations original and you're on solid ground.

What's the best aspect ratio for Seedance video ads?

It depends on the platform. Use 9:16 for TikTok, Instagram Reels, and YouTube Shorts. Use 16:9 for YouTube pre-roll and landscape display ads. Use 1:1 for Facebook and Instagram feed ads. Set the aspect ratio before you generate, not after.

How long does it take to generate a video ad with Seedance 2.0?

A single clip takes 30-60 seconds to generate. Plan for 3-5 iterations to get a polished result, so roughly 10-20 minutes for one finished ad. Once you have prompt templates dialed in, you can produce a new variation in under 5 minutes.

Can Seedance 2.0 generate ads in multiple languages?

Yes. Seedance 2.0 supports multi-language speech generation with lip sync. You can generate the same ad concept in different languages by changing the dialogue portion of your prompt and referencing the original video. The model maintains visual consistency across language versions.

Last updated: February 28, 2026