Channel: Softvelum news: Nimble Streamer, Larix Broadcaster and more

The State of Streaming Protocols - 2016 summary

The Softvelum team, which operates the WMSPanel reporting service, continues analyzing the state of streaming protocols.

With 2016 over, it's time to summarize what we've seen looking back at the past year. The media servers connected to WMSPanel processed more than 34 billion connections from 3200+ media servers (running Nimble Streamer and Wowza). As you can tell, we have some decent data to analyze.

First, let's take a look at the chart and numbers:
The State of Streaming Protocols - 2016
You can compare that to the picture of 2015 protocols landscape:


The State of Streaming Protocols - 2015
At the end of 2015, data had been collected from 2300+ servers.

What can we see?

  • HLS is pretty stable at ~3/4 of all connections, with a strong 73% share. It's the de-facto streaming standard for consuming content on end-user devices.
  • RTMP continues to go down - it has fallen from 17% to 10%. Low latency streaming still requires this protocol, so it won't go away and will keep its narrow niche.
  • RTSP keeps playing the same role of a real-time delivery protocol, but the lack of player support keeps it shrinking - it's now at 4%, less than last year.
  • Progressive download has strengthened its position, reaching 7% of views.
  • The Icecast streaming feature set was heavily improved in Nimble Streamer in 2016, so its share has grown to 3% of all connections. Online radios and music streaming services are popular, and we do our best to help them bring music to the people.
  • MPEG-DASH overcame HDS (which seems to be fading away) and now competes with SmoothStreaming.
  • The MPEG-TS feature set was also improved to cover more use cases, so its connection count increased nearly 10x. It has its niche, so we'll continue enhancing it.


If you see any trends which we haven't mentioned - please share your feedback with us!

We'll keep analyzing protocols to see the dynamics. Check our updates at Facebook, Twitter or Google+.


Adding multiple audio tracks for ABR HLS live streams

Live streaming scenarios of Nimble Streamer include ABR (adaptive bitrate), which can be delivered via HLS and MPEG-DASH.

Previously we introduced the ability to use multiple separate video and audio tracks in the same VOD stream.

Now we've added multiple audio track support for live ABR streams. This allows assigning audio streams from any incoming source to an ABR stream and defining a corresponding language for each of them. Once such a stream is defined, a player may provide language selection so the viewer gets the proper audio track.

Let's see how it's set up.

We assume all incoming video and audio streams are available before the ABR setup. Check the corresponding articles about RTMP, RTSP, MPEG-TS and Icecast for details on the setup process.
You can also use Live Transcoder to create multiple video renditions and process audio if needed.

In this article we assume you have 2 renditions of video and 2 separate audio streams, one for English and one for Spanish.

Go to the Nimble Streamer -> Live Streams menu to see the streams overview page. There, select the Adaptive stream tab to see the following page.

ABR streams page
Click on Add ABR setting to see the following dialog.
Set ABR stream parameters.


Here you first need to specify the output ABR application and stream name to be used in the URL.

Then add the sources to take video and audio from. As you can see, we've added 2 video sources and 2 audio sources.

Each audio source has an Audio only flag which indicates that the stream will be used for audio track selection. Each audio stream also has a Language field where you specify a 3-character language code, and you can optionally add a track Name to be displayed in the player.
The Default checkbox defines which stream is used for playback by default. Autoselect lets the player select the track according to the system's default language settings. Please refer to the HLS specification for more details on these two checkboxes.

Now select which servers to apply this new ABR setting to and click on Save.


Once the details are synced with the server, you can click the question mark icon to the right of the stream to try playing it and to get the stream URL for further usage.

The resulting HLS master playlist will look as follows.

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="eng",DEFAULT="YES",AUTOSELECT="YES",NAME="English",URI="live/streamaudioeng/chunks.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="spa",AUTOSELECT="YES",NAME="Spain",URI="live/streamaudiospa/chunks.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=395844,RESOLUTION=1280x720,CODECS="avc1.4d401f",AUDIO="audio"
live/streamvideo_720p/chunks.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=395844,RESOLUTION=320x180,CODECS="avc1.4d400c",AUDIO="audio"
live/streamvideo/chunks.m3u8

As you can see, there is an "audio" group of 2 streams, referenced by the respective 2 video renditions.
Now you can use a player which supports multiple HLS audio tracks.
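To illustrate how players discover these tracks, here is a minimal sketch (not an official Softvelum tool) that pulls the audio rendition attributes out of a master playlist like the one above:

```python
import re

# Sample master playlist matching the structure shown above.
PLAYLIST = '''#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="eng",DEFAULT="YES",AUTOSELECT="YES",NAME="English",URI="live/streamaudioeng/chunks.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="spa",AUTOSELECT="YES",NAME="Spain",URI="live/streamaudiospa/chunks.m3u8"
'''

def audio_tracks(master: str):
    """Return a list of attribute dicts, one per audio rendition."""
    tracks = []
    for line in master.splitlines():
        if line.startswith("#EXT-X-MEDIA:") and "TYPE=AUDIO" in line:
            # Collect the quoted KEY="value" attribute pairs.
            attrs = dict(re.findall(r'([A-Z-]+)="([^"]*)"', line))
            tracks.append(attrs)
    return tracks

for t in audio_tracks(PLAYLIST):
    print(t["LANGUAGE"], t.get("DEFAULT", "NO"), t["NAME"])
```

A player performs essentially this step before offering the language menu: renditions sharing a GROUP-ID are alternatives for the video streams that reference that group.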

Note that audio tracks will also be available for the respective MPEG-DASH ABR streams. This is currently supported by a limited number of players, but the capability itself is available for your usage.


If you have any questions regarding any functionality, just let us know. Describe your use case and we'll suggest the best way to use Nimble Streamer to build your streaming solution.

Related documentation


Live Streaming features, Live Transcoder for Nimble Streamer, RTMP feature set, Build streaming infrastructure, ABR control API

December news

The year 2016 is almost over, so before posting a year summary we'd like to highlight some significant updates from December.

Live Transcoder

Live Transcoder decoding and encoding capabilities were improved:

We've also added an article about setting constant bitrate for transcoding with x264.

This all improves efficiency and overall user experience of our Live Transcoder.

Nimble Streamer

Nimble has an interesting update for ABR: you can now add multiple language streams into an ABR stream. So you can combine N video and M audio streams for further usage in any player.


Icecast

Icecast live metadata can now be added to any outgoing Icecast stream using the WMSPanel UI. Read this article for details. This is a good addition to the existing Icecast metadata pass-through.

Read about more audio streaming scenarios supported by Nimble Streamer.


Larix mobile SDK

Larix mobile SDK has been updated.

Android SDK has several new features:

  • Set custom white balance, exposure value and anti-flicker;
  • Long-press the preview to focus the lens to infinity, double-tap for continuous auto focus;
  • Use volume keys to start broadcasting;
  • Use a selfie stick to start broadcasting.

iOS SDK has minor fixes and improvements for streaming.

Use this page to proceed with SDK license subscription.



In a few days we'll also release a yearly summary of our company. 


Follow us at Facebook, Twitter or Google+ to get the latest news and updates on our products and services.

Year of 2016 overview


Happy New Year!


We wish you and your company all the best in the new year of 2017!

To celebrate the upcoming year, we would like to summarize what was done during the past 12 months of 2016.

You might have noticed the mention of Softvelum, LLC in many updates. That's right - we've grown from a startup established in 2011 into a technology company, and this incorporation was performed to build a platform for further activities. We've been strengthening our team this year and we're ready for new challenges!
As a company, we're honored that Nimble Streamer was a finalist of the Streaming Media Europe Readers' Choice Awards 2016 in the "Best Streaming Innovation" category!
We appreciate this acknowledgment and we'd like to thank everyone who voted for us.

Before getting into the products summary, take a look at the State of Streaming Protocols for 2016. It shows the current status of streaming technologies and compares it to the data from 2015.

Now, let's see what was new in our product set.

Nimble Streamer


Our flagship product was continuously improved throughout the year. Here are the most notable enhancements.

Nimble Streamer is now available for the ARM architecture, with installation packages built for Raspberry Pi / Orange Pi / Odroid.
Read this page for all details regarding Nimble Streamer embedding capabilities.

The transmuxing engine was improved for better performance and wider codec input support. New codecs include H.265, VP6, VP8 and VP9 for video input, with AC3, E-AC3, Speex and PCM G.711 for audio input. This gives more capabilities to broadcasters who receive media from various sources and want to bring it to internet viewers.

Icecast transmuxing was also added to the common engine; it now allows taking AAC audio from any input stream and delivering it both as a playback stream and as a republished stream. You can also define metadata for any outgoing Icecast stream. Read more about the current audio streaming feature set here.

The DVR feature set was also improved throughout the year for better performance, and it gained scheduled timeshift recording.

Subtitles support was added for VOD streaming, following multiple customer requests.

The Nimble control API was significantly improved; it now covers the full set of operations available via the web UI. The native Nimble status API was also updated with new calls.

Server control capabilities can now be granted to non-admin users by account administrators.

The paywall framework for Nimble Streamer was also improved with new blocking capabilities: by User-Agent and by Referer HTTP headers.

One more significant feature is the Publish Control Framework. It allows full control over the RTMP and RTSP stream publication process - you can apply your own business logic to any incoming streams. This is especially useful for people who create their own mobile broadcasting services, as they need a way to control their content contributors.

Live Transcoder


We had many questions from our customers about content transformation in Nimble Streamer - like changing resolution and bitrate, graphics overlay and many others. So after technical and legal research, we released Live Transcoder. It's a premium addition to our freeware media server, available via license subscription.

It supports multiple video and audio codecs for decoding and encodes into H.264/AAC output streams, with a codec passthrough option.

After the content is decoded, you can apply a wide variety of FFmpeg filters to transform the content as needed for your live stream. This includes changing the resolution, bitrate and profile, applying graphics overlays, adding picture-in-picture video and much more.

All operations are performed via an excellent drag-n-drop web UI; you can check this playlist of videos to see it in action.

Besides software decoding and encoding, Live Transcoder supports Intel QuickSync and NVidia GPU hardware acceleration technologies. This allows combining flexible filtering, excellent UI and best hardware transcoding technologies into a single powerful product.

Visit Live Transcoder website to learn more about its capabilities.

Mobile SDK


Larix mobile broadcasting SDK was extremely popular during this year.

We improved the feature sets for both iOS and Android to add highly requested features - you can see the list of major features here, and also check Larix Broadcaster to see them in action.

After multiple requests we've released our mobile streaming SDK for Windows Mobile platform along with Larix Broadcaster WinPhone version. So now our SDK covers all popular mobile platforms.

We also used our streaming library in a new app called Larix Screencaster - an Android application that streams the content of your screen. This is helpful for game streamers, educators and anyone who needs to share their device screen with a wide audience.

You can obtain the latest SDKs using this purchase page in just a couple of steps.

WMSPanel


Our reporting and control panel was also improved with a new metric - Unique visitors. It calculates how many actual users viewed your content within each slice, and for each file or stream in case of Deep stats usage.

WMSPanel stats API was also improved to return all available metrics with more convenient methods.


These are the major updates, but we've done a lot more than that - check our blog and website for more. You can use the new Search page which covers our resources.




We'll continue keeping you up-to-date with all new features and improvements.
Follow us at Facebook, Twitter or Google+ to get the latest news and updates on our products and services.

Binding un-synced video and audio sources in Live Transcoder

Live Transcoder for Nimble Streamer has a wide range of content transformation features which can be used in transcoding scenarios.

Some scenarios may require taking unrelated, un-synchronized sources of content and binding them together into a synchronized stream. This capability covers some major use cases, like the following, to name just a few:

  • YouTube doesn't accept audio-only or video-only content, so you need to add the missing video/audio in order to comply.
  • Take a video stream (e.g. game footage) and put a commentator's voice on top.
  • Online radio graphics overlay: take an Icecast online radio stream, put a still picture as video and publish it on a website or an external CDN, so a common video player could play the sound and have a visual representation.
  • Take a surveillance camera video stream, insert some silence from an MP3 or MP4 file and publish it to an external destination - like the aforementioned YouTube or a CDN.

Nimble Streamer's Live Transcoder is capable of both transforming file content for further live streaming usage and synchronizing it. Let's see how you can do this.

Installing Nimble Streamer Live Transcoder


First of all, you need to have Nimble Streamer installed, as well as Live Transcoder.


You can check basic principles of Transcoder setup using our video tutorials in this playlist.

Engaging graphics and on-demand content into live transcoding


Nimble Streamer can transform the following file types into a live stream:

  • GIF
  • PNG
  • MJPEG
  • BMP
  • TIFF
  • MP4 container with H.264 video and AAC or MP3 audio
  • MP3 audio files for audio streams

Video decoder


To use files in a transcoding scenario, add the video decoder element to your scenario and choose File as the decoding source, as shown below.

Adding picture as a source for video decoder
You will see a File path input field for entering the full local path to a source file.

For still images (GIF, PNG, TIFF, MJPEG), the FPS field specifies the frame rate which will be used for the video stream. If you have a GIF animation whose frame rate differs from the one you specify, its playback speed will change accordingly. E.g. if your GIF has 15 images per second and you set the FPS field to 30, your GIF will be animated twice as fast. The Stream ID parameter is ignored for still images.
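The FPS relationship above can be sketched as a simple ratio (the function name is illustrative, not part of Nimble's configuration):

```python
# Playback speed factor of an animated source: the configured FPS
# divided by the source's native frame rate.
def playback_speed_factor(configured_fps: float, source_fps: float) -> float:
    return configured_fps / source_fps

# A 15-fps GIF rendered with FPS=30 plays twice as fast.
print(playback_speed_factor(30, 15))  # → 2.0
# Setting FPS below the native rate slows the animation down instead.
print(playback_speed_factor(15, 30))  # → 0.5
```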

For MP4, the FPS parameter is ignored, while Stream ID can be used if you have multiple video tracks. In that case, set the value to the track number, starting from 0.

The Decoder field defines the engine used for decoding the input - currently either the Default software decoder or NVidia. Read more about software decoder threads and NVidia decoder settings. The Threads field applies to the default software decoder.

Audio decoder


The audio source is set up in a similar way: drop the Audio decoder element into the scenario workspace and choose File.

Adding audio file as a source for audio decoder

Specify the File path to the audio content. If you use an MP4 with multiple audio tracks as the audio source, specify the Stream ID to select the track.

Synchronizing sources to audio


If you have any video source which is not in sync with the audio, you can bind them together. These might be streams from file content as described earlier, or original live streams from different sources which need to be in sync in order to be played by all major players.

There are 2 major approaches to syncing streams - you can either sync audio to video or sync video to audio. The difference is which stream is used as the primary source for timing synchronization.

Video to audio



This is the preferred option when audio is your main source of content and you need some auxiliary video to be shown. A good example is an online radio streamed via Icecast which you want to publish to a delivery service that requires both audio and video in its streams, e.g. a CDN or YouTube.

You need to create scenario as shown below.

Binding picture to audio stream

As you can see, audio is passed through, and a decoder input is created per the "Engaging graphics..." section above. The video pipeline also has a scale filter to make the picture match the designated size.
The encoder setup is shown below; check out the Sync related streams field. It appears when you click the Expert setup link.

Encoder settings for video made from a picture

The Sync related streams field needs to be set to the Video to audio value. If you save this encoder and then open the audio encoder settings, you will see that this field has also been set to the same value to avoid sync-up collisions.

Here audio is simply passed through, but you can create any other audio pipeline, including filtering etc.

Audio to video


This sync-up can be used when you have a source of soundless video which needs to be accompanied by an audio track. Surveillance camera streaming is a good example.

The setup is similar - see the example scenario below, which uses the file source from the Audio decoder section above.

Binding audio file to video stream

Here video is passed through, as the video content is not touched, while audio is created from a file and encoded separately. The encoder setup is shown below.

Encoder settings for audio made from a file

Here you see the Sync related streams field set to Audio to video.

Like in the previous scenario, video is passed through, but you can create any other video pipeline, including filtering etc.


"Equalize-only" scenario


Of course, you can synchronize video and audio from original live streams. This can be used when you do a voice-over with commentary etc. The setup looks like this - just make sure both output stream names are the same.

Passthrough-only scenario to bind audio and video

Either the audio encoder or the video encoder needs to have Sync related streams set to one of the two mentioned values - Audio to video or vice versa.

Encoder settings to bind audio to video stream.

This is basically how you can set up streaming from file sources and synchronize streams.


Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features, and contact us if you have any questions.

Related documentation


Viewing ASN statistics for streaming connections

A number of our large customers build and maintain their own media content delivery networks. A common layout includes origin servers which process the content from its sources, and edge servers which handle connections from end-users who watch and listen to the media.

It's important to locate edges as close to viewers as possible to reduce latency and improve overall user experience, so you need a way to determine the optimal physical location for each edge. This is why it's important to know which ASNs your viewers come from: it allows placing your edges at a proper hosting location with proper network peers.

WMSPanel can show ASN statistics for your views, i.e. how many connections were made from the most active ASNs. It's part of our media server reporting framework.

Go to Reporting menu and click on ASN stats.


You'll see a page with no stats at first, because the feature has to be enabled for the servers you need it for. It's disabled for all servers by default, because ASN is a metric you may not need until some point. To enable it, click the Enable ASN statistics link. This shows a dialog where you pick the required servers.



Once it's enabled, data collection starts. After a few days you'll see a picture like the one below.




Here you see a bubble chart giving an overview of your ASN proportions: the bigger the circle, the bigger the share of that ASN in your server audience. You also get a raw data table for further analysis.

Notice that ASN stats is a premium feature; refer to your account subscription settings page for full details, and disable it for some servers if you need to.

That's it. Contact us if you need any help or if you have any feedback.


Related documentation


WMSPanel reporting feature set


Handling live streams timing errors in Nimble Streamer DVR

Sometimes when an MPEG-TS stream is received from a media source, it may have glitches in video or audio. This is caused by third-party encoders that assign incorrect time stamps to media fragments - they may go back and forth in some unpredictable range. This happens even when the source stream is transmuxed into other protocols, e.g. RTMP.

This may bother viewers and also cause media servers to malfunction while recording the stream. Nimble Streamer allows compensating for those timing issues and performing correct recording of video and audio in DVR. If compensation can't help, Nimble just removes the chunk and resets the recording period.

Go to the Nimble Streamer top menu, select Live Streams Settings and open the DVR tab.

Open the designated stream's properties, find the Error correction section and check the Drop invalid segments checkbox. This performs the required correction on the recorded media, making playback smooth from the player's point of view.

DVR setup with invalid segments handling



If you have any further questions, contact our team.

Related documentation


Forward CEA-708 subtitles with Nimble Streamer

Providing subtitles as part of live streaming is important, and it's required by law in some countries. So people asked us to add that capability to Nimble Streamer in addition to VOD subtitles support.

There are cases when the source stream coming into Nimble Streamer already contains subtitles metainformation. Nimble now allows forwarding CEA-708 subtitles, which means all outgoing streams for all supported protocols will include subtitles.

This works for both transmuxing and transcoding.

Transmuxing supports this forwarding by default: whatever metainformation is inserted into the original stream is passed through to all other protocols.

To make this work in Live Transcoder scenarios, you need to enable the feature for outgoing streams. Live Transcoder is a premium add-on for the media server with an easy-to-use web UI to control transcoding behavior; to install it and get a license, visit this page.
To enable this feature for a particular encoded stream, edit the encoder block for the stream you want subtitles forwarded for.

Transcoder scenario

Click the encoder details icon to open the encoder details dialog.

Enable "Forward CEA708 subtitles" for encoder

Check the Forward CEA-708 subtitles box and save the settings to close the dialog. Then click Save on the scenario page to apply it on the server.

That's it - forwarding starts working right after the scenario is saved on the server.


Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features, and contact us if you have any questions.

Related documentation



FDK AAC encoder and decoder in Nimble Transcoder

Live Transcoder for Nimble Streamer has full support for AAC decoding and encoding, along with various audio filters like re-sampling, transrating and audio channel manipulation.

Now we've added FDK AAC support for both decoding and encoding. It allows adding HE-AAC and HE-AACv2 to your transcoding scenarios, and it's another alternative to the ffmpeg decoder for audio streams, with decent quality.

Let's see how you can set up FDK usage in your scenarios.

First, create a new scenario or modify an existing one. If you only need audio transformation, you can add a passthrough for the video stream.
Minimum scenario for audio transformation.
As mentioned, you can use FDK for both decoding and encoding. Here is how the decoder looks in this case:

Using FDK as decoder.
So you just select libfdk_aac in the Decoder drop-down list instead of Default.

If you'd like to encode using libfdk, open encoder dialog and choose libfdk_aac from Encoder drop-down list.

Using FDK as encoder

This also allows you to select the HE-AAC and HE-AACv2 profiles. Type "profile" in the property edit box to get a drop-down list of profiles:
  • aac_low
  • aac_he
  • aac_he_v2
  • aac_ld
  • aac_eld

Choose aac_he or aac_he_v2 for respective options.


Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features, and contact us if you have any questions.

Related documentation



Stress-testing NVidia GPU with IBM

Recently we finished extensive testing of the latest NVidia Tesla M60 graphics card on the IBM Bluemix Cloud Platform to see how much it increases the performance of Live Transcoder for Nimble Streamer.

We got excellent results, please read this article for more details:

Stress-testing NVidia GPU for live transcoding

Nimble Streamer on IBM Power8 platform

The Nimble Streamer media server is developed as a native application for all popular platforms. You can see this in the full list of supported OSes. It also supports the basic architectures available at the majority of hosting providers.

Today we add support for a new platform: POWER8 by IBM, a family of symmetric multiprocessors. Both Nimble Streamer and Live Transcoder were ported, so you can use the full capabilities of our products on this platform, including live streaming, VOD, DVR and building delivery networks of any kind.

Check the installation instructions for Ubuntu to proceed with deployment. Only Ubuntu 14.04 is supported at the moment.

Nimble Streamer can potentially be ported and embedded to any platform or OS, so feel free to contact us if you have a special case.

Related documentation


Nimble Streamer, Live Transcoder



VA API (libVA) support in Nimble Streamer

Video Acceleration API (VA API) is a royalty-free API, along with its implementation as a free and open-source library (libVA). The API provides access to hardware-accelerated video processing, using hardware such as graphics processing units (GPUs) to accelerate video encoding and decoding by offloading processing from the CPU.

Nimble Streamer supports VA API and allows using libVA in Live Transcoder as one of the encoding options among other libraries and SDKs.

Let's see how you can start using libVA in Nimble Streamer Live Transcoder.


Open your transcoding scenario or create a new one.

Sample scenario
Click the encoder block's "gear" icon to open the details dialog.

Encoder settings dialog with vaapi as Encoder
Here you need to choose the "vaapi" option from the "Encoder" drop-down. Then fill in library-specific parameters like profile etc. Once you save the encoder settings and save the scenario, libVA will start working.

Check the description of all supported parameters below.

profile

Specifies the codec profile. The values are:

  • high (default)
  • main
  • constrained baseline

level

Specifies the codec level (level_idc value * 10).
Default: 51 (Level 5.1, up to 4K30)
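The level parameter is level_idc * 10, as described above. A tiny illustrative helper (not part of Nimble) converting a human-readable level to the parameter value:

```python
# Convert an H.264 level like "5.1" to the level parameter (level_idc * 10).
def level_param(level: str) -> int:
    major, minor = level.split(".")
    return int(major) * 10 + int(minor)

print(level_param("5.1"))  # → 51 (the default)
print(level_param("4.0"))  # → 40
```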

g, keyint

Number of pictures within the current GOP (Group of Pictures).
1 - only I-frames are used.
Default: 120

bf

Maximum number of B frames between non-B-frames.

  • 0 - no B frames (default)
  • 1 - IBPBP...
  • 2 - IBBPBBP... etc.


rate_control

Sets bitrate control methods.

  • cbr - use the constant bitrate control algorithm; "bitrate", "init_bufsize", "bufsize" and "max_bitrate" may be specified.
  • cqp - use the constant quantization parameter algorithm; "qpi", "qpp" and "qpb" may be specified.

Default: cbr if bitrate is set, cqp otherwise.
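To illustrate the two modes, here are example parameter sets as plain dicts (the values are arbitrary examples; in practice you enter these in the encoder settings dialog):

```python
# Example CBR parameter set: bitrate is required, buffer sizes of 0
# mean "derive from bitrate, frame rate, profile and level".
cbr_params = {
    "rate_control": "cbr",
    "b": 2000,        # target bitrate in kbps - must be set for cbr
    "bufsize": 0,
    "init_bufsize": 0,
}

# Example CQP parameter set: per-frame-type quantization parameters,
# 1..51 where lower means better quality.
cqp_params = {
    "rate_control": "cqp",
    "qpi": 24,  # QP for I frames
    "qpp": 26,  # QP for P frames
    "qpb": 28,  # QP for B frames
}

print(cbr_params["rate_control"], cqp_params["qpi"])
```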

b, bitrate

Maximum bit-rate to be constrained by the rate control implementation. Sets bitrate in kbps.
Must be specified for cbr.

target_percentage

The bit-rate the rate control is targeting, as a percentage of the maximum bit-rate. For example, if target_percentage is 95, the rate control will target a bit-rate that is 95% of the maximum bit-rate.
Default: 66%
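The arithmetic can be sketched as follows (the helper name and figures are illustrative only):

```python
# Target bit-rate as a percentage of the maximum bit-rate.
def target_bitrate_kbps(max_bitrate_kbps: float, target_percentage: float) -> float:
    return max_bitrate_kbps * target_percentage / 100.0

print(target_bitrate_kbps(4000, 95))  # → 3800.0
print(target_bitrate_kbps(4000, 66))  # default 66% → 2640.0
```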

windows_size_ms

Window size in milliseconds. For example, if this is set to 500, the rate control will guarantee the target bit-rate over a 500 ms window.
Default: 1000

initial_qp

Initial QP for the first I-frames; 0 - the encoder chooses the best QP according to rate control.
Default: 0

min_qp

Minimal QP for frames; 0 - the encoder chooses the best QP according to rate control.
Default: 0

bufsize

Sets the size of the rate buffer in bytes. If it is zero, the value is calculated using bitrate, frame rate, profile, level, and so on.

init_bufsize

Sets how full the rate buffer must be (in bytes) before playback starts. If it is zero, the value is calculated using bitrate, frame rate, profile, level, and so on.

qpi, qpp, qpb

Quantization parameters for I, P and B frames; must be specified for CQP mode.
The value is in the 1…51 range, where 1 corresponds to the best quality.
Default: 0

quality

Encoding quality - higher is worse but faster; 0 - use the driver default.
Default: 0

fps_n, fps_d

Set output FPS numerator and denominator. It only affects num_units_in_tick and time_scale fields in SPS.

  • If fps_n=30 and fps_d=1 then it's 30 FPS
  • If fps_n=60000 and fps_d=2002 then it's 29.97 FPS

Source stream FPS or filter FPS is used if fps_n and fps_d are not set.
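The fps_n/fps_d examples above can be verified with a quick sketch:

```python
from fractions import Fraction

# Output FPS is the numerator divided by the denominator.
def fps_value(fps_n: int, fps_d: int) -> float:
    return fps_n / fps_d

print(fps_value(30, 1))                  # → 30.0
print(round(fps_value(60000, 2002), 2))  # → 29.97
# 60000/2002 reduces to the usual NTSC ratio:
print(Fraction(60000, 2002))             # → 30000/1001
```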




Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features, and contact us if you have any questions.

Related documentation


The State of Streaming Protocols - 2017 Q1

The Softvelum team, which operates the WMSPanel reporting service, continues analyzing the state of streaming protocols.

The first quarter of 2017 has passed, so let's take a look at the stats. The media servers connected to WMSPanel processed more than 10 billion connections from 3300+ media servers (running Nimble Streamer and Wowza) during the past 3 months.

First, let's take a look at the chart and numbers:

The State of Streaming Protocols - 2017 Q1



You can compare that to the picture of 2016 Q4 protocols landscape:

The State of Streaming Protocols - 2016 Q4

In the 4th quarter of 2016, data had been collected from 3200+ servers.

What can we see?

  • HLS is still stable at about 3/4 of all connections, with a 70% share.
  • RTMP stayed at the same level and even increased its share to 12%. Low latency streaming use cases still need this protocol.
  • Progressive download is the 3rd most popular at 6%.
  • MPEG-DASH overcame HDS, Icecast and MPEG-TS, going up nearly 3x by view count - it's now the 5th most popular protocol.
  • RTSP and Icecast kept their shares.

So MPEG-DASH is the only protocol which visibly improved. You can also check the December summary of streaming protocols.

We'll keep analyzing protocols to see the dynamics. Check our updates at Facebook, Twitter or Google+.

2017Q1 news

The first quarter of 2017 was full of updates for all major products of our company.

Before getting to the news, we'd like to mention that our company CEO and CTO will be visiting NAB Show 2017 this April. If you'd like to meet us and talk about our products, plans or anything else, just drop us a note so we can schedule a proper time slot.


Also, the State of Streaming Protocols for Q1 2017 is available, with MPEG-DASH going up.

Nimble Streamer


Our software media server and its transcoder got a number of important updates.

We've ported Nimble to the IBM POWER8 architecture. It's a good addition to the traditional x64 and ARM architectures which were supported before.

Speaking of hardware, we ran extensive testing of the latest NVidia Tesla M60 graphics card in the IBM Bluemix Cloud Platform to see how much it increases the performance of Live Transcoder for Nimble Streamer. We got excellent results; read this article for full details.

Live Transcoder now uses two more coding libraries in addition to the already supported ones.

Video and audio can also be bound together in case they come from un-synced sources. Read this article for details. Those un-synced sources may be video and audio files - our transcoder is now capable of producing live streams from them. The same article describes how this can be done. You can also check our videos which illustrate this process.

CEA-708 subtitles forwarding is now available in Nimble Streamer for both transmuxing and transcoding.

Compensation for live stream timing errors in DVR was added as well.

A couple of updates for our protocols processing engine:

  • RTSP can now be taken over HTTP using VAPIX.
  • MPEG-TS processing was enhanced by adding mux rate setup. We've also added a brief troubleshooting section to the corresponding article to make sure our customers can overcome typical issues related to UDP delivery. Read this article for more details.

New WMSPanel statistics


ASN viewers count metric is now available in WMSPanel. It will be useful for those companies that want to build delivery networks with better latency and user experience.


Larix Mobile Broadcasting SDK


The mobile SDKs have been continuously improved. You can find descriptions of all the latest versions in the SDK release notes.


Android and iOS

Both platforms had the following updates:

  • Multiple connections streaming. You can add several connection profiles and choose up to 3 connections for simultaneous streaming. You can stream to several destinations, like your primary and secondary origin servers, as well as third-party services like YouTube Live or Twitch.
  • Limelight authentication is available. You can publish your streams directly into Limelight CDN for further delivery.
  • Streaming and user experience improvements.


Windows Phone

We've added various updates to Windows Phone application to make it up-to-date with the fixes on other platforms.

As always, you can find the latest releases of the Larix Broadcaster streaming app in the App Store, Google Play and Windows Store.




The next quarter will bring more features, so stay tuned. Follow us at Facebook, Twitter or Google+ to get the latest news and updates of our products and services.



Emergency streams hot swap via Live Transcoder

Many of our customers have asked us to support hot swap capabilities in live streaming scenarios.

This basically refers to use cases where there is an original (primary) stream active all the time and a substitute (secondary) stream to swap in on some occasions, and either of the following actions needs to be performed:

  • Emergency stream activation. The substitute stream is usually inactive, but when it goes live it must replace the original stream. Once the secondary stream becomes inactive, the original stream must be resumed.
  • Failover stream. The substitute stream is always active, and once the original stream goes down, data from the substitute stream is taken. Once the original stream is back online, it replaces the substitute stream again.

Both cases require the swap to be made smoothly from the viewer's point of view, which means the player must not stop playback. Nimble Streamer handles both cases correctly. This article covers the first case.

Emergency stream hot swap is used in several cases; the most notable example is support of the United States Emergency Alert System (EAS), which requires a broadcaster to replace any media with content provided by EAS once it becomes available.

Let's take a look at the setup process of this capability.

Pipeline overview


There are two major points to look at in order to understand this swapping process.

The first one is the decoding of the original stream. To make streams swap smoothly with no glitches, our Live Transcoder needs to substitute the original content with the new one. This can be done only during the decoding stage, when the encoded content is already "unfolded" but hasn't yet been transformed by filters or the further encoder. So the original stream cannot be passed through or just transmuxed.
The original stream which needs to be substituted must always be decoded, even if you encode it right after that without applying any filters.
The second point is that the original and substitute streams need to match from the viewer's perspective.
Both streams need to have equal video resolution and audio sample rate.

That being said, you will need to prepare both the original and substitute streams so they can be swapped properly.
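To illustrate this matching rule, here's a small Python sketch (a hypothetical helper, not part of Nimble Streamer or WMSPanel) that checks whether two stream configurations are safe to swap:

```python
def can_hot_swap(original, substitute):
    """Return True if the two streams match in video resolution
    and audio sample rate, as required for a glitch-free swap."""
    return (original["width"] == substitute["width"]
            and original["height"] == substitute["height"]
            and original["sample_rate"] == substitute["sample_rate"])

original = {"width": 1280, "height": 720, "sample_rate": 44100}
substitute = {"width": 1280, "height": 720, "sample_rate": 48000}
print(can_hot_swap(original, substitute))  # False: sample rates differ,
                                           # so audio would glitch on switch
```

In a real setup these checks are exactly what the scale and "aformat" steps below take care of.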

Substitute stream setup


To prepare the stream, you need to create a transcoding scenario. Take a look at this sample one:


Video is decoded then scaled to the resolution of the original stream:


Audio should also be re-sampled to the same sample rate as in the original stream using the "aformat" filter:


If the two streams happen to have different audio sample rates, the sound will glitch at the moment of switching between them.

For more details on transcoding setup and usage, check our blog posts and also a set of short videos showing the process.

Now the substitute stream is ready and you can use it with the original stream.

Original stream setup


As mentioned earlier, the original stream must have all input streams decoded. Even if you want the stream to be passed through, you need to add decoder and encoder blocks.

However, most of the time you need some transcoding to be done; here's one typical example. Notice the decoding of the audio streams: in a normal use case they would be passed through, but here you see two sets of decoding and encoding with no filtering.


Now you're ready to set up the swapping.

Hot swap settings


Now go to the transcoder's main page and click the Hot swap settings button.


The new page will have just a New hot swap setting button.



Set up emergency for exact stream


Click on it to see a dialog for defining streams. It's pretty self-explanatory.


Enter the original application and stream names, then Substitute application and stream names, and select Substitution type as Emergency.

Upon saving the setting, you will see it in the table.

Emergency substitution for specific stream


Set up emergency for all applications or streams


Besides specifying exact app and stream names, for emergency substitutes you may specify only an app, or even no original app or stream at all. In that case, all streams in the app (or on the whole server) will be replaced upon receiving the incoming emergency stream.

Of course, you will still need a transcoding scenario for all original streams that are planned to be replaced. Wildcard streams in transcoder will not work for this case.

Emergency substitution for all streams in the application
But from the hot swap rules' perspective, this is just one simple rule.



That's it. Now whenever the substitute stream becomes available, it will replace the original stream. And once it's down, the original stream content will become available again.

Check the next article to see how you can use same feature for streaming failover setup.



Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features and contact us if you have any questions.

Related documentation



Streams failover hot swap via Live Transcoder

Nimble Streamer Live Transcoder now supports hot swap capabilities for live streaming scenarios.

This covers use cases when there is an original (primary) stream active all the time and a substitute (secondary) stream to replace it on some occasions. These include emergency stream activation and the opposite case, stream failover, which is described below.

During stream failover, the substitute stream is always active; once the original stream goes down, data from the substitute stream is taken. Once the original stream is online again, it replaces the substitute stream.

The swap must be made smoothly from the viewer's point of view, which means the player must not stop playback. Nimble Streamer covers this properly.

The setup is similar to the emergency stream hot swap, with a few changes.



Pipeline overview


To make streams swap smoothly with no video or audio artifacts, the Live Transcoder needs to substitute the original content with the new one. This can be done only during the decoding stage, when the encoded content is already "unfolded" but hasn't yet been transformed by filters or the further encoder. So the original stream cannot be passed through or just transmuxed.
The original stream which needs to be substituted must always be decoded, even if you encode it right after that without applying any filters.
Another point is that the original and substitute streams need to match from the viewer's perspective.
Both streams need to have equal video resolution and audio sample rate.

So you will need to prepare both the original and substitute streams to make the swap work properly.

Substitute stream setup


If your stream's source fails for some reason, you'll need to show something instead. Let's do it like on TV - we'll show a test card with some simple audio. The card will be displayed from a static image, while the sound will come from an MP4 file's audio track.

To prepare the stream, you need to create a transcoding scenario. Take a look at this sample one:



As you can see from the decoder block, the video track is created from an image file. The decoder settings would be as follows:



Then you need to scale it to the resolution of the original stream. So just create a scale filter with proper settings.



The audio can be taken from a file as well. The decoder is set up as follows.



Audio should also be re-sampled to the same sample rate as in the original stream using the "aformat" filter:


If the two streams happen to have different audio sample rates, the sound will glitch at the moment of switching between them.

In addition to that, video and audio must be synchronized.

Set up sync mode for video and audio encoders

For more details on creating streams from files and syncing them, check this article.


Now the substitute stream is ready and you can use it with the original stream.

Original stream setup


The original stream must have all input streams decoded. Even if you want the stream to be passed through, you need to add decoder and encoder blocks.

In case you need some transcoding to be done, here's one typical example. Notice the decoding of the audio streams: in a normal use case they would be passed through, but here you see two sets of decoding and encoding with no filtering.


Now let's set up the hot swap itself.

Hot swap settings


Now go to the transcoder's main page and click the Hot swap settings button.


The new page will have just a New hot swap setting button.


Click on it to see a dialog for defining streams.



Enter the original application and stream names, then Substitute application and stream names, and select Substitution type as Failover.

Upon saving the setting, you will see it in the table, and it will start working in a few seconds.



That's it. Now whenever the original stream becomes unavailable, it will be replaced by the substitute stream. And once the original stream is up again, its content will be transmitted.

Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features and contact us if you have any questions.

Related documentation


Pulling HLS streams to process

Nimble Streamer has a transmuxing engine which can take in a stream over any supported transport protocol and generate output streams in all available protocols. You can check our Live Streaming feature set to see the full list.
Usually our customers use RTMP, RTSP or MPEG-TS to deliver streams to the media server and create outgoing streams that can be consumed by end users. However, there are cases when only HLS streams are available as a source. So now Nimble Streamer also supports HLS as an input.

The setup is similar to the MPEG-TS input setup and uses the same approach of input and output streams. Let's see how to set this up.

Go to the Nimble Streamer -> Live Streams Settings top menu, select the required server and choose the MPEGTS In tab as shown below.



Here you can add several types of streams as input. Click on Add HLS stream button.


Here you enter your HLS playlist Stream URL and also specify an Alias which will be used in other settings as a name for this incoming stream.

If your playlist contains several chunk lists, e.g. in case of an ABR stream, only the first chunk list will be processed.
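If you're not sure how many chunk lists your master playlist contains, you can inspect it yourself. Here's a minimal Python sketch (the parsing is simplified, and the stream URIs are hypothetical):

```python
def variant_uris(master_playlist_text):
    """Extract variant chunk list URIs from an HLS master playlist.
    Each #EXT-X-STREAM-INF tag is followed by the URI of a chunk list."""
    lines = master_playlist_text.strip().splitlines()
    uris = []
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF") and i + 1 < len(lines):
            uris.append(lines[i + 1].strip())
    return uris

master = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=1280x720
hi/chunks.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=640000,RESOLUTION=640x360
lo/chunks.m3u8"""

variants = variant_uris(master)
print(variants)     # ['hi/chunks.m3u8', 'lo/chunks.m3u8']
print(variants[0])  # only this first chunk list would be processed
```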

Once you save it, click on the MPEGTS Out tab to set up the outgoing stream.


Here you need to click Add outgoing stream to see the dialog below.


Here you specify the Application name and Stream name which will be used for naming your resulting outgoing stream. Also, as with the MPEG-TS streams setup, you need to pick your input stream as the Video source and Audio source for this stream.

After saving the new stream settings, you will see them being synced up with your server.



Once it's complete, you can go to the Nimble Streamer -> Live streams menu, select your server's Outgoing streams and check your stream output. It will be available via the protocols which you specified in the Global tab for your server or in the Applications tab for an individual application. These settings define how your stream will be handled.

That's it, you can now use your outgoing stream.


Contact us if you have any feedback or questions regarding Nimble Streamer functionality and usage.

Related documentation


Encoding to MP3 with Nimble Streamer Live Transcoder

The MP3 audio format is widely used alongside AAC. In April 2017, the last patent related to MP3 expired, so it can now be used without royalties or other limitations as part of audio processing and transmission scenarios.

Our Live Transcoder now allows taking various audio formats as input, such as

  • AAC
  • MP3
  • MP2
  • Speex
  • PCM G.711 (a-law and μ-law)

The output audio formats now are

  • AAC
  • MP3

So you can now easily transcode AAC to MP3. You can also apply various audio filters like re-sampling, transrating or audio channel manipulations.

This can be used as part of Nimble Streamer audio streaming capabilities.

Let's take a look at a simple scenario which takes incoming video+audio stream and transcodes audio into MP3, regardless of its original audio format.



As you can see, we've set the video channel to pass-through. The audio channel is decoded with the default software decoder and then encoded again. Here is the encoder settings dialog.


Set the Encoder field to "lame_mp3" - this is the library we use for encoding - and the Codec field is automatically set to MP3.

That's it. Save the encoder settings, then save the scenario, and it will be applied to your transcoder instance. The specified source stream will be transcoded as defined.


LAME

This software uses code of LAME MP3 licensed under the LGPL; its source and build script can be downloaded here.


Feel free to visit the Live Transcoder webpage for descriptions of other transcoding features and contact us if you have any questions.

Related documentation


FAQ: Why are you better than your competitors?

Our trial users often ask us questions like "How are your solutions better than competitors' products? What are your benefits and advantages over products X, Y or Z? Why should I choose you instead of competitors?"

So basically the question is "Why do you think you are better than your competitors?"

The short answer is simple: You tell us why.

OK, let us give you a more detailed answer.

Your project needs tools for media streaming, such as a media server, a transcoder or a mobile broadcasting solution. So you make a list of your own requirements which your perfect solution must comply with.

The next step for you is to make a list of solutions you'd like to try before making your choice. Each product category has several candidates these days but the list will not be huge anyway.

Now the real work starts. You should install and try every candidate solution you find suitable. Yes, install it, set it up and run your own test use cases and scenarios.
You should look at these areas:

  • Feature set. This is what you're actually looking for the most. All solutions on the market share about 80% of their functionality. Some solutions are unique in their own area of expertise - and that 20% difference might be exactly what your project needs. And of course, check every feature you want to use yourself; don't trust marketing materials.
  • Cost of ownership. Both CAPEX and OPEX should be considered, including license cost, hardware cost, consultant pricing etc. You know your total expenses and revenues better than anyone, so use simple math rather than a sales pitch to guide you.
  • Support and documentation. Carefully check how each company's support team handles your requests. You may need their help in the future, so you need to be sure they will respond accurately and on time. You don't want the documentation to be outdated, and you expect it to help, not to confuse. Ask the support team questions in case the docs don't clarify some points.
  • Legal questions. Make sure the selected products have appropriate license agreements for the patents and technologies they use. Like we have. You don't want to find yourself in the middle of a lawsuit for patent infringement.

All these points matter, so you should carefully check them.

If you ask anyone about their solutions instead of trying them yourself, you will probably be told lots of good things, but no one's advice can compare with your own experience.

Trust your conclusions, don't listen to anyone, make your own decisions and decide what's best for you.

Hopefully we've answered your initial question.

If you still have something to ask, just contact us.


Introducing SLDP

Low latency has been a trending issue over the past years as Flash and RTMP are being deprecated by the industry. Media streaming companies are trying different options to handle real-time video and audio delivery, and our company has been getting requests for a solution that addresses this concern.

We introduce SLDP: Softvelum Low Delay Protocol.

It's a last-mile delivery protocol for end-user devices on multiple platforms. SLDP is based on WebSockets.

Its basic capabilities are:
  • Sub-second delay between origin and player.
  • Codec-agnostic: play whatever your end-user platform supports - H.264, VP9, HEVC.
  • ABR support: switching channels takes just one GOP time, and each channel may use its own codec.
  • HTTP and HTTPS on top of TCP: use it with any network settings.
SLDP HTML5 playback is available in MSE-capable browsers via a lightweight JavaScript player. It works fine in Chrome, Firefox, Safari, Chromium and Microsoft Edge on hardware platforms which have that support, like x64 or Android.

The mobile SDKs are coming soon to cover iOS and Android devices.

As for the server side, at the moment SLDP transmission is available in Nimble Streamer. It's just another output protocol supported by our server; any supported live protocol can be used as an input.

Feel free to try it in action right now by installing or upgrading your Nimble instance and selecting SLDP in the live streams settings alongside the existing methods.

Please visit the SLDP website and contact us in case of any questions. We're moving forward with SLDP, so your feedback is appreciated.


Related documentation

