Hi all,
Just had a question about how best to handle playback of recorded sessions with very large file sizes. When recording long sessions with many LCCS clients, the zip file we receive is quite large. When our playback app tries to replay these sessions, the LCCS media servers take long enough to download the files from the repository server that playback this way is impractical. If I understand correctly, the media servers cache the recording, but only for the duration of the room session plus ~5 minutes after the playback "room" is vacated (so every time playback is requested when no one is already viewing it, the recording must be fetched anew from the repository).
So, do you have any advice on how to best do playback for large recordings?
Thanks,
Davis
Hi Davis,
Your understanding of how the recording playback works is fairly accurate.
That said, we have never encountered a download issue severe enough to "make playback in this fashion impractical." Could you be more specific? In particular, how large are your recording zip files, and how long does it take to download them (or rather, how long does it take for you to see playback activity on the client)?
Thanks,
Nikola
Hi Nikola,
So our files are > 100 MB (sometimes up to half a gigabyte), and it takes the playback app on the order of minutes to download them. Our repo server is just a simple Rackspace instance, so we could probably optimize the download time to a degree, but with file sizes > 100 MB it will still take quite a while.
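As a rough back-of-envelope on the transfer times involved (the link speed below is an illustrative assumption, not a measurement of our actual server):

```python
def download_seconds(size_mb: float, link_mbps: float) -> float:
    """Transfer time in seconds for size_mb megabytes over a link_mbps megabit/s link."""
    return size_mb * 8 / link_mbps

# A 500 MB recording over a hypothetical 100 Mbit/s link:
# download_seconds(500, 100) -> 40.0 seconds, before any server-side overhead
```

So even on a fast link, the fetch alone adds tens of seconds to minutes of startup latency every time the cache is cold.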
Thanks,
Davis
Hi Davis,
One way to "slice" this (literally!) is to make a succession of recordings for the same real-time event. At playback time, you could step through each recording in sequence, and any given recording would be a lot smaller.
nigel
Hi Nigel,
Thanks for the suggestion. We had actually considered something like this, and may yet end up going this route. However, it seems we cannot have multiple recordings of a room running simultaneously, so we can't produce both one recording whose zip contains the full session and a set of shorter recordings covering parts of it. While it's not essential to have the full uninterrupted session in a single zip, it would be nice.
Another (perhaps preferable) option would be to composite the stream files ourselves in video editing software. It seems we could stitch together the bits and pieces we need from our large session recording files in Final Cut Pro or similar software.
The issue here is timing. Is there some way we can 'decipher' the offsets of the stream files we receive, so we can figure out how to layer them ourselves? In the zip file we are sent on completion of a recording, two files look promising: __PacingStream.flv and __StreamOffsets.flv. Is there some way to process these to get a readable timeline of the start points of the various video and audio streams?
I could probably build a playback app that transcribed most of this information based on StreamReceive and CollectionChange events, but I'm not sure how I would match it up with the file names of the streams (FLV files) in the zip. So, is there a convenient way to make use of __PacingStream.flv or __StreamOffsets.flv (or whichever other files might be relevant)?
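In case it's useful to anyone attempting the same: the FLV container itself is publicly documented, so the tag timestamps in these files can be walked with a short script. This is only a sketch. It reads the 11-byte tag headers and yields each tag's type, timestamp, and payload size; decoding the script-data payloads of __StreamOffsets.flv into stream names and offsets would additionally require an AMF decoder, which is not shown, and how LCCS encodes that mapping is an assumption on my part.

```python
import struct

TAG_NAMES = {8: "audio", 9: "video", 18: "script-data"}

def read_flv_tags(data: bytes):
    """Yield (tag_type_name, timestamp_ms, data_size) for each tag in an FLV byte string."""
    if data[:3] != b"FLV":
        raise ValueError("not an FLV file")
    # Bytes 5-8 of the header hold the header size (normally 9), big-endian.
    header_size = struct.unpack(">I", data[5:9])[0]
    pos = header_size + 4  # skip the header plus the leading PreviousTagSize0 field
    while pos + 11 <= len(data):
        tag_type = data[pos]                                  # 8=audio, 9=video, 18=script data
        data_size = int.from_bytes(data[pos + 1:pos + 4], "big")
        ts_low = int.from_bytes(data[pos + 4:pos + 7], "big") # lower 24 bits of the timestamp
        ts_ext = data[pos + 7]                                # extended (high) timestamp byte
        timestamp = (ts_ext << 24) | ts_low                   # milliseconds since stream start
        yield TAG_NAMES.get(tag_type, str(tag_type)), timestamp, data_size
        pos += 11 + data_size + 4  # tag header + payload + trailing PreviousTagSize
```

Running this over __StreamOffsets.flv should at least show when each script-data tag occurs; the payload bytes are where the per-stream offsets would live.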
Thanks,
Davis
I'm also interested in this.
I believe I'm having similar issues. The playback application will not load long streams. These may be very large (197 MB), but I'm hoping for a solution that plays them back in full without taking 'slices', like I've seen in closed-circuit security camera software in the past. Please let me know if there is anything we can implement after the recordings are in the repository to allow streaming or download.
Thanks!
-Eric
The following is from my playback application:
07:36:15 GMT-0500 protocols: [object ProtocolPortPair],[object ProtocolPortPair]
07:36:15 GMT-0500 [attempt 1 of 2] Connecting to 0/1: rtmfp://fms6.acrobat.com/playback/na2-sdk-e80bc473-6f16-41ac-adcb-67e01950cd72/10252011045542703/session #startProtosConnect#
07:36:15 GMT-0500 tempNetStatusHandler 0/2,NetConnection.Connect.Success
07:36:15 GMT-0500 isTunneling? false
07:36:15 GMT-0500 is using RTMPS? false
07:36:15 GMT-0500 #SessionManagerPlayback 113503 fms connected: [Event type="connected" bubbles=false cancelable=false eventPhase=2]
07:36:16 GMT-0500 #SessionManagerPlayback 113635 receiveLogin:
07:36:16 GMT-0500 . [object]
07:36:16 GMT-0500 \\
07:36:16 GMT-0500 .options [object]=
07:36:16 GMT-0500 .descriptor [object]= [object Object]
07:36:16 GMT-0500 .ticket [string]= mugtvgq1brqn
07:36:16 GMT-0500 #SessionManagerPlayback 113637 ======= onConnected: play __PacingStream
07:36:16 GMT-0500 #SessionManagerPlayback 113835 ======= onMetadata: play __StreamOffsets
07:36:16 GMT-0500 #SessionManagerPlayback 114001 ======= receiveStreamOffset session/default_WebCamera, 83
07:36:16 GMT-0500 #SessionManagerPlayback 114001 ======= receiveStreamOffset session/FileManager, 118
07:36:16 GMT-0500 #SessionManagerPlayback 114001 ======= receiveStreamOffset session/AVManager, 156
07:36:16 GMT-0500 #SessionManagerPlayback 114001 ======= receiveStreamOffset session/RoomManager, 200
07:36:16 GMT-0500 #SessionManagerPlayback 114001 ======= receiveStreamOffset session/UserManager, 239
07:36:16 GMT-0500 #SessionManagerPlayback 114001 ======= receiveStreamOffset session/E873CE4B-585B-37A7-C7DD-3D18C4448BAC, 273
07:36:16 GMT-0500 #SessionManagerPlayback 114001 ======= receiveStreamOffset session/33EDB046-708D-BC33-E7D3-3D1A5AEDE459, 283
07:36:16 GMT-0500 #SessionManagerPlayback 114001 ======= receiveStreamOffset session/DAD5701F-2655-0CC4-9024-3D18C1A4FE0F, 306
07:36:16 GMT-0500 #SessionManagerPlayback 114002 ======= receiveStreamOffset session/229C710A-100C-2C66-3BBC-3D1A5A2F23B2, 318
07:36:16 GMT-0500 #SessionManagerPlayback 114002 ======= receiveStreamOffset session/ABC269D9-ADC4-2CE9-7CDD-3D301AA19559, 1423972
07:36:16 GMT-0500 #SessionManagerPlayback 114002 ======= receiveStreamOffset session/A91BFDF2-802E-CBF3-B704-3D3991251EB0, 2031576
07:36:16 GMT-0500 #SessionManagerPlayback 114002 ======= receiveStreamOffset session/09F6623E-ED16-8E48-F8AF-3D5E34C6AE0D, 4433042
07:36:16 GMT-0500 #SessionManagerPlayback 114052 ======= receiveStreamOffset session/6EDE1CBB-6594-6F76-4005-3D604052AC4A, 4566785
07:36:16 GMT-0500 #SessionManagerPlayback 114052 ======= onOffsetStreamPlayStatus
07:36:16 GMT-0500 . [object]
07:36:16 GMT-0500 \\
07:36:16 GMT-0500 .duration [number]= 0
07:36:16 GMT-0500 .code [string]= NetStream.Play.Complete
07:36:16 GMT-0500 .level [string]= status
07:36:16 GMT-0500 .bytes [number]= 1556
07:36:16 GMT-0500 #SessionManagerPlayback 114171 realReceiveLogin:
07:36:16 GMT-0500 . [object]
07:36:16 GMT-0500 \\
07:36:16 GMT-0500 .descriptor [object]= [object Object]
07:36:16 GMT-0500 RECEIVED LOGIN AT SESSION
07:36:16 GMT-0500 .user descriptor from server [object]
07:36:16 GMT-0500 \\
07:36:16 GMT-0500 .affiliation [number]= 10
07:36:16 GMT-0500 .userID [number]= 0
07:36:16 GMT-0500 #SessionManagerPlayback 114174 ======= subscribeCollection root
07:36:16 GMT-0500 #SessionManagerPlayback 114174 ======= subscribeCollection __RootCollection - play
It stops at play, and then picks up with an "invalid ticket" error (I'm assuming there is a timeout?).
Hi,
I haven't yet looked too deeply into it, but I'd still be interested to know whether there is a practical way to use __PacingStream.flv or __StreamOffsets.flv (or whatever other tools the recording package provides) to match the streams to the stream file names included in the zip, so that we could build our own composite clips with multiple video and audio files in Final Cut (or any other video editing software).
Is there any way to do this?
Thanks,
Davis
Hi Davis,
I'm including a basic app that will help you inspect messages in FLVs. Create an AIR project and include the files in this zip. From there, run the app; you should be able to select __StreamOffsets.flv, and it will tell you the relative start times of every stream in the recording.
Hope that helps. Note: this code is provided AS IS, etc., etc., YMMV.
nigel
Hi Nigel,
Thank you very much!
Cheers,
Davis
Hi LCCS Team:
I have been able to create a custom playback application that uses the FLVPlayback class and reads the .flv files from the repository. I also compiled and ran the "as is" FLVInspector code to find the offset info and media types. Right now my demo tries to play the FLV files that exist in the archive, but it's ugly when it plays them unsynchronized. The main gap with this approach is running the inspector whenever a new recording appears, so we can pull the offsets and play the streams accordingly. Is there a way to run an AIR application like a script, executed with a couple of parameters? Might we be able to trigger this to happen on new recordings?
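For the "trigger on new recordings" part, one workaround is a small polling script that watches the recordings folder and shells out to whatever command-line entry point the inspector exposes. This is only a sketch; the inspector command below is a placeholder, and whether your AIR build accepts a file path argument (e.g. via the app's InvokeEvent arguments when launched with `adl`) is an assumption to verify against your setup.

```python
import subprocess
import time
from pathlib import Path

def scan_once(root: Path, seen: set) -> list:
    """Return recording zips under `root` that have not been seen before, marking them seen."""
    new = [p for p in sorted(root.glob("*.zip")) if p not in seen]
    seen.update(new)
    return new

def watch_recordings(folder: str, command: list, poll_secs: float = 5.0):
    """Poll `folder` and run `command <zipfile>` on each newly arrived recording.

    `command` is a placeholder for your inspector's actual CLI invocation,
    e.g. ["adl", "FLVInspector-app.xml", "--"] -- adjust to your tooling.
    """
    seen = set()
    while True:
        for zf in scan_once(Path(folder), seen):
            subprocess.run(command + [str(zf)], check=False)
        time.sleep(poll_secs)
```

A cron job doing the same scan would also work if you don't want a long-running process.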
Unfortunately the same issue exists for us even on smaller recordings. We are limiting the length to 15 minutes, and it still takes 2:30 for the GET command to show up in the access_log. After this period, the client application never starts playback; it never fires creationComplete on the session. Reloading the page and trying again works like a charm, though!
Thanks for any helpful suggestions,
-Eric