Displaying an image on the screen is not that difficult a task. Linux has framebuffer device files: you open one, issue an ioctl to get metadata like screen geometry and color depth, then mmap the framebuffer as an array of pixels you can render to with the CPU. It's eerily similar to the way terminal applications work.
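
A minimal sketch of that fbdev path, assuming /dev/fb0 exists and a 32-bit pixel format (error handling kept terse):

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);  /* geometry, bits per pixel */
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);  /* stride, buffer length */

    uint8_t *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    /* CPU-render: fill the whole screen with a solid gray. */
    for (uint32_t y = 0; y < vinfo.yres; y++) {
        uint32_t *row = (uint32_t *)(fb + y * finfo.line_length);
        for (uint32_t x = 0; x < vinfo.xres; x++)
            row[x] = 0x00808080;
    }

    munmap(fb, finfo.smem_len);
    close(fd);
    return 0;
}
```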

It's also possible to use Linux KMS/DRM without any user space libraries.

https://github.com/laxyyza/drmlist/
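
To give a flavor of the raw ioctl interface, here is a rough sketch that enumerates DRM connectors without libdrm, assuming the kernel UAPI headers <drm/drm.h> and <drm/drm_mode.h> are on the include path and /dev/dri/card0 exists:

```c
#include <drm/drm.h>
#include <drm/drm_mode.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0)
        return 1;

    /* First call with zeroed counts asks the kernel how many objects exist. */
    struct drm_mode_card_res res = {0};
    if (ioctl(fd, DRM_IOCTL_MODE_GETRESOURCES, &res) < 0)
        return 1;

    /* Second call fills in the connector IDs. */
    uint32_t *connectors = calloc(res.count_connectors, sizeof(uint32_t));
    res.connector_id_ptr = (uint64_t)(uintptr_t)connectors;
    res.count_fbs = res.count_crtcs = res.count_encoders = 0;
    if (ioctl(fd, DRM_IOCTL_MODE_GETRESOURCES, &res) < 0)
        return 1;

    for (uint32_t i = 0; i < res.count_connectors; i++)
        printf("connector id: %u\n", connectors[i]);

    free(connectors);
    close(fd);
    return 0;
}
```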

The problem with hardware accelerated rendering is that much of the associated functionality is implemented in user space rather than in the kernel, and those user space drivers unfortunately force libc on us. One would have to reimplement something like Mesa to avoid that. Not impossible, just incredibly time consuming.

Things could have been organized in a way that makes this feasible. SQLite is an example: you can plug in your own memory allocation functions and your own VFS layer. I've been slowly porting the SQLite Unix VFS to freestanding Linux so I can use it in my freestanding applications.
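
Roughly, the extension points look like this; the my_* functions and my_freestanding_vfs are hypothetical placeholders for freestanding implementations built on raw syscalls:

```c
#include <sqlite3.h>

/* Hypothetical freestanding allocator entry points. */
extern void *my_malloc(int n);
extern void  my_free(void *p);
extern void *my_realloc(void *p, int n);
extern int   my_size(void *p);
extern int   my_roundup(int n);
extern int   my_init(void *arg);
extern void  my_shutdown(void *arg);

/* Hypothetical freestanding VFS, e.g. a port of os_unix.c. */
extern sqlite3_vfs my_freestanding_vfs;

int setup_sqlite(void)
{
    static const sqlite3_mem_methods mem = {
        my_malloc, my_free, my_realloc,
        my_size, my_roundup, my_init, my_shutdown, 0
    };

    /* Swap in the custom allocator; must happen before sqlite3_initialize(). */
    int rc = sqlite3_config(SQLITE_CONFIG_MALLOC, &mem);
    if (rc != SQLITE_OK)
        return rc;

    /* Register the custom VFS and make it the default. */
    return sqlite3_vfs_register(&my_freestanding_vfs, 1);
}
```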
