The service allows broadcasters and streamers to adapt landscape video into vertical formats for platforms like TikTok, Instagram Reels and YouTube Shorts in real time. The process removes the need for manual post-production or specialized AI expertise.
AWS Elemental Inference uses an agentic AI application that analyzes video and applies optimizations autonomously. Vertical-video crop detection and clip generation run independently, with no human intervention required to extract value from the stream.
The system achieves latency of 6 to 10 seconds, compared with the minutes required for traditional post-processing. This “process once, optimize everywhere” approach allows multiple AI features to run simultaneously on the same video stream without reprocessing the content for each capability.
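The “process once, optimize everywhere” idea can be pictured as a single ingest pass fanned out to several independent analyzers, rather than re-decoding the stream once per feature. The sketch below is purely conceptual; the class and function names are hypothetical illustrations, not AWS Elemental Inference APIs.

```python
# Conceptual sketch of "process once, optimize everywhere":
# each frame is ingested once, then every analyzer runs on the same
# decoded data. All names here are illustrative assumptions.

from typing import Callable, Dict, List


class Frame:
    """Stand-in for a decoded video frame."""
    def __init__(self, index: int):
        self.index = index


def detect_crop(frame: Frame) -> str:
    # Hypothetical stand-in for finding a vertical-video crop window.
    return f"crop-window@frame{frame.index}"


def score_highlight(frame: Frame) -> str:
    # Hypothetical stand-in for rating a frame for clip generation.
    return f"highlight-score@frame{frame.index}"


def process_once(frames: List[Frame],
                 analyzers: Dict[str, Callable[[Frame], str]]) -> Dict[str, List[str]]:
    """Single pass over the stream; every analyzer sees each frame,
    so no feature triggers a second decode of the content."""
    results: Dict[str, List[str]] = {name: [] for name in analyzers}
    for frame in frames:                      # one decode/ingest pass
        for name, analyze in analyzers.items():
            results[name].append(analyze(frame))
    return results


stream = [Frame(i) for i in range(3)]
out = process_once(stream, {"vertical_crop": detect_crop,
                            "clip_gen": score_highlight})
```

The point of the pattern is that adding a new AI feature adds only another entry in the analyzer map, not another pass over the video.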
The service integrates with AWS Elemental MediaLive, AWS's live video processing service, enabling AI features without modifications to existing architecture. It uses fully managed foundation models that are automatically updated and optimized.
Key features at launch include:
AWS Elemental Inference is available in four regions: US East (N. Virginia), US West (Oregon), Europe (Ireland) and Asia Pacific (Mumbai). Pricing is consumption-based with no upfront costs or commitments.
The post AWS Launches AI-Powered Vertical Video Transformation Service appeared first on TV News Check.