Today's Guide to the Marketing Jungle from Social Media Examiner...
We're halfway through the week. How's your marketing strategy holding up? Today's tips and insights will help you fine-tune your approach.
In today's edition:
- The ad campaign you need to consider using again
- LinkedIn strategy: personal profiles vs. company pages
- 2 ways to create quality video with Veo 3
- 🗞️ Industry news from Meta, Reddit, and more
Founder's Ad Campaigns
Does your marketing rely on product promotion videos? Need a better way to reach customers and improve your results?
Simple, raw videos where your founder talks about why your business exists outperform slick campaigns across Meta, TikTok, and YouTube. Why?
Strategist Dara Denny explains why these origin-story ads resonate so deeply, even when filmed in your kitchen on an iPhone.
You'll also get a few proven structures—from the classic problem-solution format to objection handling and educational angles—that can turn even camera-shy founders into conversion machines. Watch more here.
A Combined Strategy for LinkedIn
If you're a solopreneur or running a small marketing team, chances are LinkedIn feels like a juggling act. Should you prioritize your personal profile, your Company Page, or both?
Overwhelmed by where to start?
Michelle J Raymond breaks down the Power of Two strategy, showing why it's not about choosing one over the other but about using both in tandem to build credibility and drive results.
This episode cuts through the noise and gives you a lens to prioritize based on your unique business goals. Watch more here.
Is AI Leaving You Behind?
Are you unsure where to start with AI? Or are you already using it, but know you could be doing so much more?
Here's the truth: you don't need to master everything with AI. You just need to know your next few steps.
Take our free 3-minute assessment to reveal your AI marketing readiness level and get a 30-day action plan for success.
Are you an Observer, Experimenter, Optimizer, or Transformer? Find out in just a few minutes and get specific next steps based on where you are today. No generic advice, no overwhelming tech speak – just actionable guidance in plain English that you can start using TODAY.
Yes. I'm ready to take my free assessment.
Google Veo 3: Creating Pro-Quality AI Video Without a Film Crew
Google Veo 3 isn't free: video generation runs on a credit system, and different video types and quality levels consume different amounts of credits.
When using Google Veo 3, you can choose between the Fast model, which produces high-quality results for about twenty credits per eight-second clip, and the Quality model, which costs one hundred credits per clip.
Samuel recommends using Google Veo 3 via your personal Gmail account coupled with one of Google's AI plans. Each of these plans gives you access to other AI tools he uses in his video production process, such as Google Flow and Google Whisk.
With the Pro plan, you can generate fifty video clips using the fast model. This allocation allows for substantial experimentation and content creation while maintaining cost efficiency.
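To put those numbers in context: at roughly twenty credits per Fast-model clip, fifty clips works out to about 1,000 credits, and that same budget would cover only around ten clips on the Quality model.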
The Ultra plan provides significantly more generation capacity, making it suitable for heavy users or professional content creators who need to produce large volumes of video content.
How to Create Quality Video with Google Veo 3
When you're ready to generate your first video clips, access Google Veo 3 from within Google Flow, and choose your video creation option.
Text-to-Video Generation
Most creators will find text-to-video generation sufficient for their needs, particularly when starting with Google Veo 3. This method provides excellent results while maintaining straightforward workflows that don't require additional image creation or complex setup procedures.
Start by assembling your pre-production elements into a comprehensive prompt that includes the scene description, character DNA (detailed visual description), voice DNA (detailed audio characteristics), and specific dialogue or actions for the scene.
The scene description should include environmental details, lighting conditions, camera angles, and any relevant context.
You should also be explicit about audio requirements. While Google Veo 3 can generate background music and sound effects, Samuel typically instructs the system to focus only on character dialogue and essential scene audio to prevent jarring transitions between eight-second clips that might have different background audio elements.
The reasoning behind this instruction relates to post-production control. Rather than accepting whatever background elements the AI generates, Samuel prefers to add music and sound effects during post-production in his video editing tool of choice, ensuring smooth transitions and consistent audio throughout the final video.
His prompts follow this basic structure:
Here is the scene: [detailed scene description]. Here is the character description: [comprehensive visual details]. Here is what I want the character to say: [specific dialogue]. Here is the voice description: [detailed audio characteristics]. Create this scene with character dialogue only, no background music, no sound effects, no ambient audio.
Next, select the generation mode: Fast or Quality. The choice affects both cost and output characteristics. Samuel consistently uses the Fast model because the quality difference doesn't justify the Quality model's five-fold cost.
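If you're comfortable with a little code, Veo models are also exposed through the Gemini API, so you can script this text-to-video step instead of clicking through Flow. The Python sketch below uses Google's google-genai SDK; the model name shown and Veo 3's availability through the API are assumptions, so verify both against Google's current documentation before relying on it.

import time

from google import genai

client = genai.Client()  # assumes a GOOGLE_API_KEY environment variable is set

# Assemble the prompt the same way Samuel does: scene, character DNA,
# voice DNA, dialogue, and explicit audio instructions.
prompt = (
    "Here is the scene: [detailed scene description]. "
    "Here is the character description: [comprehensive visual details]. "
    "Here is what I want the character to say: [specific dialogue]. "
    "Here is the voice description: [detailed audio characteristics]. "
    "Create this scene with character dialogue only, no background music, "
    "no sound effects, no ambient audio."
)

# Video generation is a long-running operation, so poll until it finishes.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed model id; check current docs
    prompt=prompt,
)
while not operation.done:
    time.sleep(15)
    operation = client.operations.get(operation)

# Download the finished clip.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("clip.mp4")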
Frames-to-Video
For creators seeking maximum control over their video output, Google Veo 3's frames-to-video feature allows you to use photographs, AI-generated images, or any visual content as the foundation for video creation. The AI analyzes the provided images and creates movement, animation, and visual storytelling based on your additional prompts.
Samuel considers this method "the ultimate when it comes to character consistency" because it starts with exactly the visual you want rather than hoping text descriptions will generate appropriate imagery. When you control the input image, you dramatically improve the consistency and quality of the output.
The pre-production work begins with creating precise images using Google Whisk. Samuel designs individual characters, backgrounds, and complete scenes in Whisk, iterating until he achieves exactly the visual style and character appearance he desires.
He then develops prompts describing how the image should be animated, what characters should say, and what actions should occur within the scene.
This method enables complex scene creation where Samuel can design specific character interactions, environmental details, and visual compositions before animation begins. He might create an image showing two characters in a particular setting, then animate their conversation and interactions through the frames-to-video process.
While frames-to-video requires additional preparation time compared to text-to-video generation, the results justify the investment for creators prioritizing visual quality and consistency.
Once satisfied with the static images, Samuel brings them into Veo 3 via Google Flow, selects the frames-to-video option, and generates his clips.
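For readers scripting this with the Gemini API sketch above, the same call can, in principle, start from one of your Whisk images. The image argument and the types.Image constructor below are assumptions about the SDK's image-to-video support, and the filename is a placeholder, so treat this as a sketch and confirm the details in the current docs.

from google.genai import types

# Load the Whisk-designed frame (hypothetical filename).
with open("whisk_scene.png", "rb") as f:
    frame = types.Image(image_bytes=f.read(), mime_type="image/png")

operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed model id
    prompt=(
        "Animate the two characters greeting each other; dialogue only, "
        "no background music, no sound effects."
    ),
    image=frame,  # assumed parameter for frames-to-video generation
)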
Pro Tip: Use Frames-to-Video With ElevenLabs Voice Cloning to Create Video From Images of Real People
Samuel has experimented extensively with using personal photographs of himself as input for frames-to-video generation, allowing Veo 3 to create appropriate lip-sync and facial animations for the dialogue. The initial audio doesn't sound like him, but the visual elements are correctly synchronized.
He then uploads the final video to ElevenLabs, extracts the original audio track, and processes it through ElevenLabs' voice cloning system to replace the AI-generated voice with a cloned version of his own, maintaining the same timing, cadence, and dialogue content.
This workflow requires training ElevenLabs on the target person's voice characteristics beforehand. Once trained, the system can swap voices while preserving the original timing and synchronization, resulting in videos that feature both the person's actual appearance and their authentic voice.
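For a sense of what that voice swap looks like under the hood, here's a rough Python sketch against ElevenLabs' speech-to-speech (voice changer) REST endpoint. The endpoint path, form fields, and model ID reflect our reading of ElevenLabs' public API and may have changed; it also assumes you've already trained a voice on the target person and extracted the audio track from the Veo clip (for example, with ffmpeg).

import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"   # placeholder
VOICE_ID = "YOUR_CLONED_VOICE_ID"     # the voice trained on the real person

url = f"https://api.elevenlabs.io/v1/speech-to-speech/{VOICE_ID}"

# Send the extracted Veo audio track and get back the same performance
# re-voiced with the cloned voice, preserving timing and cadence.
with open("veo_clip_audio.mp3", "rb") as audio_file:
    response = requests.post(
        url,
        headers={"xi-api-key": API_KEY},
        files={"audio": audio_file},
        data={"model_id": "eleven_multilingual_sts_v2"},  # assumed model id
    )

response.raise_for_status()
with open("cloned_voice.mp3", "wb") as out:
    out.write(response.content)  # re-attach this track in your video editor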
For businesses wanting to create personalized content featuring actual team members or spokespersons, this methodology provides a pathway to professional-quality video production without traditional filming requirements.
Other topics discussed include:
- Why Marketers Should Use AI Video Creation Tools
- What You Need to Know Before Using Google Veo 3
- How to Use Gemini for Video Storyboarding and Script Development
- How to Use Gemini to Create Character DNA Profiles to Maintain Character Consistency and Voice Development in Your Video
- How to Use Google Whisk to Test Video Prompts Before Generating Video in Veo 3
- How to Download Your Veo 3 Video With Google Flow
Today's advice is based on insights from Leslie Samuel, a featured guest on the AI Explored podcast.
Watch the full interview on YouTube.
Meta Licenses Midjourney: Meta has struck a licensing deal with Midjourney, a leader in generative AI imagery, to embed its acclaimed aesthetic technology into future Meta models and products. The move enhances Meta's ability to compete with rivals like OpenAI and Google by improving visual features across its platforms and potentially lowering content creation costs. Notably, Midjourney will maintain its independence, operating without external investors even as it scales its impact. Meta Engineering via Threads
Caption Length and Instagram Reach: Instagram chief Adam Mosseri clarified that using long captions has no significant impact on how far a post reaches in the app. Responding during his weekly Instagram Stories Q&A, Mosseri encouraged users to write longer captions if they want to, noting that while not essential, they can enhance storytelling and engagement for those who choose to use them creatively. Social Media Today
Reddit Pro Goes Mobile with New Tools for AI-Era Discovery: Reddit has rolled out a major update to Reddit Pro, now available on iOS, equipping businesses with mobile tools for real-time community engagement and brand monitoring. The update includes new performance insights that track the impact of brand comments, helping businesses understand where and how they're building trust. Reddit also introduced its first-ever Organic Playbook to guide authentic participation and brand strategy on the platform. With Reddit now the top source cited by AI platforms, building an organic presence has become essential for brands seeking visibility in AI-generated recommendations. Reddit
YouTube Rolls Out Templates, Expands Hype, and Boosts AI Summaries: YouTube is enhancing mobile video creation with the launch of templates in its YouTube Create app for Android, giving creators customizable video structures and royalty-free music to streamline production. The platform's Hype feature—designed to elevate smaller creators—is expanding to 17 new markets and will soon include topic-based leaderboards and community-driven sharing tools. Viewers can post hyped videos directly to their feeds, and established creators will be able to spotlight rising talent. Meanwhile, YouTube is broadening access to AI-generated video summaries, offering quick content overviews that complement but don't replace traditional descriptions. YouTube
What Did You Think of Today's Newsletter?
Did You Know?
Only male crickets chirp, and the frequency of chirping increases as the temperature rises. You can estimate the temperature outdoors by counting cricket chirps for a given time period.
Michael Stelzner, Founder and CEO
P.S. Add michael@socialmediaexaminer.com to your contacts list. Use Gmail? Go here to add us as a contact.