[MPlayer-dev-eng] Libvo2 draft

Arpi arpi at thot.banki.hu
Thu Dec 6 00:10:51 CET 2001


Hi,

> I'm gonna put this text as attachment to force you to
> copy only the parts you want to reply;)
no problem, my mailer includes text attachments as normal mail text :)

btw, next time please find the key called ENTER and press it more times :)
i don't like to read 173572 char long lines :(

btw the text is committed to CVS, DOCS/tech/

so you and others can/should keep it up-to-date there!

>   update_surface - as in the note above, this is the draw function. Why did I change its name? I have 2 reasons: first, I don't want an implementation like vo1; second, it really must update the video surface, it must directly call the system function that will do it. This function should work only with slices, the size of a slice should not be limited and should be passed (e.g. ystart, yend); if we want a draw function, we will call one from the libvo2 core, that will call this one with start=0; ymax=Ymax;. Also some system screen update functions wait for vertical retrace before returning, other functions just can't handle partial updates. In this case we should inform the libvo2 core that the device cannot slice, and the libvo2 core must take care of the additional buffering.
hmm. good idea for the case when the surface is created by the codec (internal
buffers), not by libvo2. but how do you want to tell the difference between
drawing with update_surface and drawing with get_surface?
i think get_surface and flip_image (really display_surface) are mandatory,
and update_surface or rendering directly to the surface is optional.
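
just to illustrate how i imagine the split (all names here are made up,
not a final API):

/* sketch only: driver entry points as i imagine them, names invented */
#include <stdint.h>

typedef struct vo2_driver {
    /* mandatory: hand out a surface the codec or the core can render into */
    uint8_t *(*get_surface)(int imgfmt, int width, int height, int *stride);

    /* mandatory: show the finished surface (flip_image / display_surface) */
    int (*display_surface)(void);

    /* optional: copy a slice (ystart..yend) of an external buffer to the
     * screen; drivers that can't do partial updates leave this NULL and
     * the core does the extra buffering */
    int (*update_surface)(uint8_t *src[], int stride[],
                          int ystart, int yend);
} vo2_driver_t;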

>   hw_decode - to make all dvb, dxr3, tv etc. developers happy. This function is for you. Be careful, don't OBSEBE it, think for the future too: this function should have an ability to control HW IDCT, MC that one day will be supported under linux too. Be careful:)
yeah.

>   subpicture - this function will place subtitles. It must be called once to place them and once to remove them; it should not be called on every frame, the driver will take care of this. 
>     Currently I propose this implementation: we get an array of bitmaps. Each one has its own starting x, y and its own height and width; each one (or all together) could be in a specific imgfmt (spfmt). THE BITMAPS SHOULD NOT OVERLAP! This may not be a hw limitation, but sw subtitles may get confused if they work as a 'c' filter (see my libvo2 core).
WHY SHOULDN'T THEY OVERLAP? They actually DO overlap now!!!
The subtitle is drawn character by character, and each character has an
(optional) thin glow/shadow which will overlap with the chars at its
left/right. This way their glow/shadow will construct a nice outline.

i think we must allow overlapping bitmaps, or solve this problem in the libvo2
core - i.e. render strings (arrays of chars) into single bitmaps.
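
just to make clear what i mean: the core could render every string (glow and
shadow included) into one bitmap, and pass an array like this down to the
driver (sketch only, names made up):

/* sketch of a subpicture bitmap, not a final struct */
typedef struct vo2_subbitmap {
    int x, y;               /* position on the video surface */
    int w, h;               /* bitmap size */
    int spfmt;              /* bitmap format */
    int stride;
    unsigned char *data;
} vo2_subbitmap_t;

/* called once to place the bitmaps and once (count=0) to remove them,
 * not on every frame */
int subpicture(vo2_subbitmap_t *bmps, int count);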

> I would like to hear from the GUI developers. Could we separate Mouse/Keyboard from the driver? What info do you need to do it? Don't forget that SDL has its own keyboard/mouse interface.
I've asked Pontscho (he doesn't understand English well...).
There are 2 options for the GUI<->mplayer interface.

The current, ugly (IMHO) way:
the gui has control of the video window, it handles resizing, moving,
key events etc. all window manipulation in libvo drivers is disabled if gui
is enabled. it was required as libvo isn't inited and running when the gui
already displays the video window.

The wanted way:
GUI shouldn't control the X window directly, it should use libvo2 control
calls to resize/move/etc. it. But there is a big problem: X cannot be opened
twice from one process. It means GUI and libvo2 should share the X connection.
And, as the GUI runs first (and when a file is selected etc. then libvo2 is
started), it should connect to X and later pass the connection to libvo2.
It needs an extra control() call and some extra code in mplayer.c

but this way the gui could work with non-X stuff, like SDL, fbdev (on a second
head for TVout etc.), hardware decoders (dvb, dxr3) etc.

as X is so special, libvo2 should have a core function to open/get an X
connection, and it should be used by all X-based drivers and the gui.
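
something like this is what i mean (the VO2_CTRL_SET_X_CONNECTION code and
vo2_control() are invented for this sketch):

#include <X11/Xlib.h>

enum { VO2_CTRL_SET_X_CONNECTION = 1 };      /* invented control code */

extern int vo2_control(int cmd, void *arg);  /* hypothetical core call */

static Display *shared_dpy;

int gui_open_display(void)
{
    shared_dpy = XOpenDisplay(NULL);         /* GUI connects to X first */
    return shared_dpy ? 0 : -1;
}

void start_video_out(void)
{
    /* when a file is selected and libvo2 is started, pass the existing
     * connection instead of a second XOpenDisplay() in the driver;
     * the same control() interface can do resize/move/fullscreen and
     * border/title on-off later */
    vo2_control(VO2_CTRL_SET_X_CONNECTION, shared_dpy);
}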

also, the GUI needs functions to get mouse and keyboard events, and to
enable/disable window decoration (title, border).

we need a fullscreen switch control function too.

> Maybe we should allow the video driver to change the libin driver?
forget libin. most input stuff is handled by libvo drivers.
think of all the X stuff (x11, xv, dga, xmga, gl), SDL, aalib, svgalib.
only a few transparent drivers (fbdev, mga, tdfxfb, vesa) have none, but all
of them run on the console (and maybe on a second head) at fullscreen, so
they may not need mouse events. console keyboard events are already caught
and handled by getch2.

i can't see any sense in writing libin.

mplayer.c should _handle_ all input events, collected from the lirc interface,
getch2, libvo2 etc., and it should set update flags for gui and osd.
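
roughly like this (just a sketch, every function and flag name here is
made up):

/* sketch of event collection in mplayer.c, names invented */
struct input_state {
    int key;            /* last key event, 0 if none */
    int gui_update;     /* gui needs to react/redraw */
    int osd_update;     /* osd needs to be redrawn */
};

void collect_input(struct input_state *in,
                   int (*from_getch2)(void),   /* console keyboard */
                   int (*from_vo2)(void),      /* window key/mouse events */
                   int (*from_lirc)(void))     /* lirc remote */
{
    int key = 0;

    if (from_getch2) key = from_getch2();
    if (!key && from_vo2) key = from_vo2();
    if (!key && from_lirc) key = from_lirc();

    if (key) {
        in->key = key;
        in->gui_update = 1;    /* main loop lets gui and osd react later */
        in->osd_update = 1;
    }
}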

but we should share some plugin code. examples: the *_vid code, all the common
X code. it can be done either by implementing them in the libvo2 core (and
calling them from plugins) or by including these files from all drivers which
need them. the latter method is a bit cleaner (from the viewpoint of
core-plugin independence) but results in bigger binaries...

> query_format -> not usable in this form, this function means that all negotiation will be performed outside libvo2. Replace it or find a better name. 
> close -> open/close :)
it was added just for compatibility, and it's planned to be removed when we
update the mplayer code to use libvo2.

> choose_buffering - all buffering must stay hidden. The only exception is for hw_decode. In the new implementation this function is not usable.
i think it's very important to implement it in the core.
it will be the interface and implement the compatibility layer between codecs
and libvo2 drivers. it should find the optimal buffering for the driver
(number and type of buffers) and the codec (direct/indirect rendering, access
to internal buffers or copy by slices etc...) and set up optional software
colorspace conversion and scaling, deinterlacing etc.

at least in my plans :)
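
roughly what i have in mind for the core (pseudo-C, every name here is
invented):

/* sketch of the buffering negotiation in the libvo2 core */
enum buffering { BUF_DIRECT, BUF_COPY, BUF_CONVERT };

struct codec_caps {
    int can_render_external;  /* can write into buffers we give it */
    int min_buffers;          /* how many buffers it needs for that */
    int has_internal_buffer;  /* exports a readable internal buffer */
    int can_slice;            /* calls draw_slice */
};

struct driver_caps {
    int num_surfaces;         /* how many surfaces the driver can give */
    int supports_imgfmt;      /* driver takes the codec's imgfmt as-is */
};

enum buffering choose_buffering(const struct codec_caps *c,
                                const struct driver_caps *d)
{
    /* 1. full direct rendering: codec draws into the driver's surfaces */
    if (c->can_render_external && d->num_surfaces >= c->min_buffers
        && d->supports_imgfmt)
        return BUF_DIRECT;

    /* 2. codec exposes its buffer or slices: core copies into the surface */
    if ((c->has_internal_buffer || c->can_slice) && d->supports_imgfmt)
        return BUF_COPY;

    /* 3. otherwise: temp buffer + sw colorspace conversion / scaling /
     *    deinterlacing filters, then copy to the driver */
    return BUF_CONVERT;
}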

> FILTER 1..x - processing:{ c-copy(buff1,buff2), p-process(buff1) }, 

i'm not sure. postprocessing depends on the codecs a lot (it uses some internal
quantizer data of the codecs, and some codecs have it built in, like the win32
ones and divx4linux).

>   If we want direct rendering we need a normal buffer, no filters, and (at least) 2 video surfaces (we may allow a 'p' filter like subtitles).
it works with one buffer too (but it's ugly). moreover, most codecs only work
with static buffers, so direct rendering is only possible with a single
surface. divx4 and directshow work with multiple surfaces too. vfw doesn't.

direct rendering with 1 surface is always faster than with 2 or more, so we
should allow it for people with really slow systems. (kabi said that divx is
playable on a p166mmx with single-buffer direct rendering...)

> DECODER - We always get a buffer from the decoder; some decoders could give a pointer to their internal buffers, others take pointers to buffers where they should store the final image. Some decoders could call draw_slice after they have finished some portion of the image.
>   type_of_buffer - I take this from the current libvo2 spec.  I call 'I' internal buffer (readonly), 'K' static buffer (one, constant pointer), and 'B' normal buffer. 
>   slice - this flag shows that the decoder knows about and wants to work with slices.

huh. not so simple.

imho:

1. internal buffers. readonly. can be static (constant pointer) or not
   static (several pointers, usually 2-3 alternating)
   examples: divx4 in odivx mode. libavcodec.
2. single, static buffer (constant pointer, readonly) -> no double buffering
   examples: vfw codecs. native msrle, cram codecs. some xanim codecs.
3. two or more, static buffers (readonly)
   examples: divx4 in direct rendering mode (divx4 interface), directshow
4. temporary buffer (can be overwritten, doesn't have to be a constant ptr)
5. slices: decoder calls draw_slice with its small internal buffer many times.
6. mpeg buffering: combination of 3. and 4/5. (2 static and 1 temp buffer)
   the decoding and displaying order of frames differ. used by libmpeg2.

method 6 is the most complex one and needs proper driver support: the driver
should provide at least 3 (4 for double buffering) fast-read buffers for full
direct rendering, or 1 (2 for double buffering) temp. buffer (may be slow) for
partial direct rendering (direct rendering of B frames, indirect (copy)
rendering of I/P frames).

to be continued...


A'rpi / Astral & ESP-team

--
mailto:arpi at thot.banki.hu
http://esp-team.scene.hu


