
LPC: Life after X


By Jonathan Corbet
November 5, 2010
Keith Packard has probably done more work to put the X Window System onto our desks than just about anybody else. With some 25 years of history, X has had a good run, but nothing is forever. Is that run coming to an end, and what might come after? In his Linux Plumbers Conference talk, Keith claimed to have no control over how things might go, but he did have some ideas. Those ideas add up to an interesting vision of our graphical future.

We have reached a point where we are running graphical applications on a wide variety of systems. There is the classic desktop environment that X was born into, but that is just the beginning. Mobile systems have become increasingly powerful and are displacing desktops in a number of situations. Media-specific devices have display requirements of their own. We are seeing graphical applications in vehicles, and in a number of other embedded situations.

Keith asked: how many of these applications care about network transparency, which was one of the original headline features of X? How many of them care about ICCCM compliance? How many of them care about X at all? The answer to all of those questions, of course, is "very few." Instead, developers designing these systems are more likely to resent X for its complexity, for its memory and CPU footprint, and for its contribution to lengthy boot times. They would happily get rid of it. Keith says that he means to accommodate them without wrecking things for the rest of us.

Toward a non-X future

For better or for worse, there is currently a wide variety of rendering APIs to choose from when writing graphical libraries. According to Keith, only two of them are interesting. For video rendering, there's the VDPAU/VAAPI pair; for everything else, there's OpenGL. Nothing else really matters going forward.

In the era of direct rendering, neither of those APIs really depends on X. So what is X good for? There is still a lot that is done in the X server, starting with video mode setting. Much of that work has been moved into the kernel, at least for graphics chipsets from the "big three," but X still does it for the rest. If you still want to do boring 2D graphics, X is there for you - as Keith put it, we all love ugly lines and lumpy text. Input is still very much handled in X; the kernel's evdev interface does some of it, but falls far short of doing the whole job. Key mapping is done in X; again, what's provided by the kernel in this area is "primitive." X handles clipping when application windows overlap each other; it also takes care of 3D object management via the GLX extension.
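
To make "falls far short" concrete, here is a minimal, hedged sketch of a program reading raw events from the kernel's evdev interface; the device node is an assumption and error handling is elided. All the kernel hands back is a bare keycode - turning it into a character under the right layout is somebody else's job:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    struct input_event ev;
    int fd = open("/dev/input/event0", O_RDONLY);  /* device node varies per system */

    while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
        if (ev.type == EV_KEY)  /* raw keycode: no keysym, no layout, no modifiers */
            printf("keycode %u %s\n", (unsigned)ev.code,
                   ev.value ? "down" : "up");
    return 0;
}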

These tasks have a lot to do with why the X server is still in charge of our screens. Traditionally, mode setting has been a big and hairy task, with the requisite code buried deep within the X server; that has put up a big barrier to entry for any competing window system. The clipping job had to be done somewhere. The management of video memory was done in the X server, leading to a situation where only the server gets to take advantage of any sort of persistent video memory. X is also there to make external window managers (and, later, compositing managers) work.

But things have changed in the 25 years or so since work began on X. Back in 1985, Unix systems did not support shared libraries; if the user ran two applications linked to the same library, there would be two copies of that library in memory, which was a scarce resource in those days. So it made a lot of sense to put graphics code into a central server (X), where it could be shared among applications. We no longer need to do things that way; our systems have gotten much better at sharing code which appears in different address spaces.

We also have much more complex applications - back then xterm was just about all there was. These applications manipulate a lot more graphical data, and almost every operation involves images. Remote applications are implemented with protocols like HTTP; there is little need to use the X protocol for that purpose anymore. We have graphical toolkits which can implement dynamic themes, so it is no longer necessary to run a separate window manager to impose a theme on the system. It is a lot easier to make the system respond "quickly enough"; a lot of hackery in the X server (such as the "mouse ahead" feature) was designed for a time when systems were much less responsive. And we have color screens now; they were scarce and expensive in the early days of X.

Over time, the window system has been split apart into multiple pieces - the X server, the window manager, the compositing manager, etc. All of these pieces are linked by complex, asynchronous protocols. Performance suffers as a result; for example, every keystroke must pass through at least three processes: the application, the X server, and the compositing manager. But we don't need to do things that way any more; we can simplify the architecture and improve responsiveness. There are some unsolved problems associated with removing all these processes - it's not clear how all of the fancy 3D bling provided by window/compositing managers like compiz can be implemented - but maybe we don't need all of that.

What about remote applications in an X-free world? Keith suggests that there is little need for X-style network transparency anymore. One of the early uses for network transparency was applications oriented around forms and dialog boxes; those are all implemented with web browsers now. For other applications, tools like VNC and rdesktop work and perform better than native X. Technologies like WiDi (Intel's Wireless Display) can also handle remote display needs in some situations.

Work to do

So maybe we can get rid of X, but, as described above, there are still a number of important things done by the X server. If X goes, those functions need to be handled elsewhere. Mode setting is moving into the kernel, but there are still a lot of devices without kernel mode setting (KMS) support. Somebody will have to implement KMS drivers for those devices, or they may eventually stop working. Input device support is partly handled by evdev. Graphics memory management is now handled in the kernel by GEM in a number of cases. In other words, things are moving into the kernel - Keith seemed pleased at the notion of making all of the functionality be somebody else's problem.
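
As a hedged illustration of what "mode setting in the kernel" looks like from user space, this sketch walks the display connectors through libdrm's KMS interface with no X server in sight; the device path is an assumption, and error handling is omitted:

#include <fcntl.h>
#include <stdio.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);  /* requires a KMS-capable kernel driver */
    drmModeRes *res = drmModeGetResources(fd);

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
        /* modes[0] is normally the preferred mode of the attached display */
        if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
            printf("connector %u: %s\n",
                   (unsigned)conn->connector_id, conn->modes[0].name);
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
    return 0;
}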

Some things are missing, though. Proper key mapping is one of them; that cannot (or should not) all be done in the kernel. Work is afoot to create a "libxkbcommon" library so that key mapping could be incorporated into applications directly. Accessibility work - mouse keys and sticky keys, for example - also needs to be handled in user space somewhere. The input driver problem is not completely solved; complicated devices (like touchpads) need user-space support. Some things need to be made cheaper, a task that can mostly be accomplished by replacing APIs with more efficient variants: GLX can be replaced by EGL, GLES can be used instead of OpenGL in many cases, and VDPAU is an improvement over Xv. There is also the little problem of mixing X and non-X applications while providing a unified user experience.
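
libxkbcommon did eventually take shape; as a sketch of the idea, here is roughly how an application would use it to map a raw evdev keycode to a keysym on its own, without asking an X server. The API shown is the one the library later settled on, and error handling is omitted:

#include <stdio.h>
#include <xkbcommon/xkbcommon.h>

int main(void)
{
    /* Compile a keymap from the system's default rules/model/layout */
    struct xkb_context *ctx = xkb_context_new(XKB_CONTEXT_NO_FLAGS);
    struct xkb_keymap *map = xkb_keymap_new_from_names(ctx, NULL,
                                     XKB_KEYMAP_COMPILE_NO_FLAGS);
    struct xkb_state *state = xkb_state_new(map);

    /* evdev's KEY_A is 30; X-style keycodes are offset by 8 */
    xkb_keysym_t sym = xkb_state_key_get_one_sym(state, 30 + 8);

    char name[64];
    xkb_keysym_get_name(sym, name, sizeof(name));
    printf("keysym: %s\n", name);  /* "a" with the usual US layout */
    return 0;
}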

Keith reflected on some of the unintended benefits that have come from the development work done in recent years; many of these will prove helpful going forward. Compositing, for example, was added as a way of bringing fancy effects to 2D applications. Once the X developers had compositing, though, they realized that it enabled the rendering of windows without clipping, simplifying things considerably. It also separated rendering from changing on-screen content - two tasks which had been tightly tied before - making rendering more broadly useful. The GEM code had a number of goals, including making video memory pageable, enabling zero-copy texture creation from pixmaps, and the management of persistent 3D objects. Along with GEM came lockless direct rendering, improving performance and making it possible to run multiple window systems with no performance hit. Kernel mode setting was designed to make graphical setup more reliable and to enable the display of kernel panic messages, but KMS also made it easy to implement alternative window systems - or to run applications with no window system at all. EGL was designed to enable porting of applications between platforms; it also enabled running those applications on non-X window systems and the dumping of the expensive GLX buffer sharing scheme.
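
For a sense of what dumping GLX means in practice, here is a hedged sketch of context creation through EGL - a GLES2 context with no window-system surface bound yet, error handling omitted:

#include <EGL/egl.h>

EGLContext make_gles2_context(EGLDisplay *dpy_out)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    /* Any config capable of GLES2 rendering will do for this sketch */
    static const EGLint cfg_attrs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n);

    static const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    *dpy_out = dpy;
    /* No GLX and no X server required; bind to a native surface later */
    return eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
}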

Keith put up two pictures showing the organization of graphics on Linux. In the "before" picture, a pile of rendering interfaces can be seen all talking to the X server, which is at the center of the universe. In the "after" scene, instead, the Linux kernel sits in the middle, and window systems like X and Wayland are off in the corner, little more than special applications. When we get to "after," we'll have a much-simplified graphics system offering more flexibility and better performance.

Getting there will require getting a few more things done, naturally. There is still work to be done to fully integrate GL and VDPAU into the system. The input driver problem needs to be solved, as does the question of KMS support for video adaptors from vendors other than the "big three." If we get rid of window managers, somebody else has to do that work; Windows and Mac OS push that task into applications, and maybe we should too. But, otherwise, this future is already mostly here. It is possible, for example, to run X as a client of Wayland - or vice versa. The post-X era is beginning.




LPC: Life after X

Posted Nov 5, 2010 21:16 UTC (Fri) by ttrafford (guest, #15383) [Link]

"tools like VNC and rdesktop work and perform better than native X"

I still fail to see how starting a full desktop session is going to outperform the situation where I just want to run a remote instance of "xeyes". Or "virt-install", or "qmon" for more useful examples.

LPC: Life after X

Posted Nov 5, 2010 21:35 UTC (Fri) by jspaleta (subscriber, #50639) [Link]

I think certain of the remote protocol implementations have the ability to forward a single window, for example seamless RDP.

It would be good if someone knowledgeable about the state of the art in competing remote protocols could shake their crystal ball, peer into the not-too-distant future of 2 to 4 years from now (the timescale for Wayland dominance), and try to paint a picture of what should be possible. SPICE and RDP are the obvious candidates that leap to mind for me. I think some of us curmudgeonly people instinctively think VNC is meant to fill this role because it's been the workhorse for many of us for a long time (too long, maybe, if the old-dogs-and-new-tricks adage is true). But maybe that's the wrong technology to slot in as a puzzle piece here.

-jef

LPC: Life after X

Posted Nov 21, 2010 13:00 UTC (Sun) by 9000 (guest, #71396) [Link]

There's a project aiming to achieve exactly this: http://telepathy.freedesktop.org/wiki/Xzibit

LPC: Life after X

Posted Nov 5, 2010 21:41 UTC (Fri) by drag (guest, #31333) [Link]

Well, virt-install is an interesting example. I was going to mention that virt-manager and related tools have their own protocol for remotely administering machines, support lots of nice features for setting up administration roles and integrating into more enterprisey environments, and that X networking is probably about the least useful, least efficient, and least secure way to remotely access libvirt-controlled VMs. Hell, it even supports accessing libvirtd over SSH if you want to do that.

But then I realized that you were talking about Oracle's tools, not Linux/KVM stuff.

:D

LPC: Life after X

Posted Nov 5, 2010 21:56 UTC (Fri) by ttrafford (guest, #15383) [Link]

Your first thought was right; I did actually mean the libvirt tools, and apparently I just didn't know about the remote management stuff. That wouldn't actually help me much, though, since I don't have virt-manager or virt-install on my Mac, which is my client-side connection.

As for security, I personally do everything over ssh and let it handle tunneling X.

It's just that "single-application, automatically-handled/forwarded-by-ssh" situation that I hope is continued after the dust settles.

LPC: Life after X

Posted Nov 8, 2010 9:38 UTC (Mon) by mjthayer (guest, #39183) [Link]

> It's just that "single-application, automatically-handled/forwarded-by-ssh" situation that I hope is continued after the dust settles.

Given that the X server is still going to be around in a slightly less privileged position on the stack, those applications will still be able to work as they do today if nothing better is found.

LPC: Life after X

Posted Nov 8, 2010 9:45 UTC (Mon) by mjthayer (guest, #39183) [Link]

> ...those applications will still be able to work as they do today if nothing better is found.

One dodgy thought - what about a remote application embedding a small web server and doing http on stdin and stdout rather than over a socket, so that a local web browser could start the application over ssh?

I dare say of course that if that idea isn't completely useless someone will already have done it.

LPC: Life after X

Posted Nov 12, 2010 6:04 UTC (Fri) by jch (guest, #51929) [Link]

> what about [...] embedding a small web server

http://www.transmissionbt.com/images/screenshots/Clutch-L...

LPC: Life after X

Posted Nov 12, 2010 9:04 UTC (Fri) by mjthayer (guest, #39183) [Link]

>> what about [...] embedding a small web server

> http://www.transmissionbt.com/images/screenshots/Clutch-L...

That is still going over a socket though, or so it looks to me, not stdin and stdout forwarded by ssh.

LPC: Life after X

Posted Nov 12, 2010 6:02 UTC (Fri) by jch (guest, #51929) [Link]

> Given that the X server is still going to be around...

That won't help you much if the application is no longer able to act as an X client.

LPC: Life after X

Posted Nov 12, 2010 8:56 UTC (Fri) by mjthayer (guest, #39183) [Link]

>> Given that the X server is still going to be around...

>That won't help you much if the application is no longer able to act as an X client.

Why shouldn't it be able to? If the X server is around people can write new X clients if it makes sense (although they will probably find other ways to do network forwarding when they start to think about it). If it uses Gtk+ for X it can even blend in seamlessly with the non-X clients.

LPC: Life after X

Posted Nov 12, 2010 10:36 UTC (Fri) by dlang (guest, #313) [Link]

if all the new applications are written to run on X then wayland has no native apps and does no good.

if all the new applications are written to run on wayland, then they cannot be clients for X and the fact that there is still an X server you can run on top of wayland does no good (except for obsolete apps that pre-date wayland)

do you see why people who need network transparency may be opposed to the common development going in a direction that doesn't support it?

LPC: Life after X

Posted Nov 12, 2010 19:14 UTC (Fri) by bronson (subscriber, #4806) [Link]

Your reply implies that it's impossible to layer network transparency on top of Wayland. I doubt that's true.

LPC: Life after X

Posted Nov 12, 2010 19:39 UTC (Fri) by dlang (guest, #313) [Link]

you are right, I am assuming that good network transparency (as opposed to what VNC etc. provide) is going to require some consideration in the design of the windowing system, and since the people working on the windowing system are taking the attitude that 'nobody needs network transparency', such consideration is unlikely.

LPC: Life after X

Posted Nov 7, 2010 11:23 UTC (Sun) by jond (subscriber, #37669) [Link]

You only ever run xeyes to confirm X forwarding is working; without X forwarding you wouldn't.

Configure virt-manager properly and run a local virt-install instance...

LPC: Life after X

Posted Nov 10, 2010 11:11 UTC (Wed) by nix (subscriber, #2304) [Link]

Personally I run xlogo for that. :)

LPC: Life after X

Posted Nov 11, 2010 12:31 UTC (Thu) by wtarreau (subscriber, #51152) [Link]

Indeed, VNC has nothing to do with X. I can't seamlessly copy/paste text, moving a window is horribly slow in VNC, and it's not compatible with the X applications that already run on other systems. It's not the way to replace X. In my opinion, the proper way to do that is to have the network not between X layers but next to them. In short, applications should be able to more or less directly communicate with the hardware, and an X network server should be available like any other application (think Xceed and equivalents under other OSes). Dynamic libraries should provide the alternate API needed for local applications to use the X network protocol to render on remote displays.

I can say I'm using X remotely on a daily basis, including between various systems. It would be a big functional loss if networking were simply removed. That would be one reason to switch to a more open system :-/

Rethinking remote display

Posted Nov 5, 2010 21:48 UTC (Fri) by JoeBuck (subscriber, #2330) [Link]

I don't think that we should just give up on remote transparency, though it might be structured quite differently in the future. Also, I don't think that the VNC approach, which just keeps two bitmaps in sync, is going to be an adequate replacement. We may still find ourselves in a situation where we have a central server with thousands of processors, a remote user with a highly capable GPU, a limited bandwidth connection and complex 3D objects to display (maybe the user is doing mechanical CAD, or experimenting with protein folding as part of drug development).

If we take a model-view-controller view, the model is on the central server and the view/controller portion is partly on both, and we want to communicate manipulations on 3D objects across the network as efficiently as possible. One approach might just be to do OpenGL calls as RPC calls across the network, and use something more X-like to send the user's gestures (mouse clicks, keystrokes, multitouch) in the other direction. But it would be best if done in such a way that applications work over the network by default, without special coding by the application developer, because the framework supports it.

"Just do it in the browser" seems a step backward from the security point of view; the HTTP server isn't running as me.

Rethinking remote display

Posted Nov 5, 2010 22:22 UTC (Fri) by drag (guest, #31333) [Link]

Network transparency kicks ass. It's certainly a killer feature for X.

but there are things that could be better.... Like session management. With Windows and RDP/etc or VNC I can disconnect and reconnect again later without interruption. That's something you cannot do with X (at least the way it is now).

SPICE kicks ass. As far as I can tell it's much faster than even ICA, which blows X and VNC out of the water. I wonder if there is some way to integrate SPICE into Wayland.

Like maybe have a Gallium driver that outputs to SPICE instead of to VGA out, or something crazy like that.

Rethinking remote display

Posted Nov 6, 2010 0:42 UTC (Sat) by Lennie (subscriber, #49641) [Link]

SPICE and Wayland are developed/owned/whatever by the same company, Red Hat, so it should in theory be possible for the developers to talk to each other, right? ;-)

Rethinking remote display

Posted Nov 6, 2010 7:41 UTC (Sat) by rossburton (subscriber, #7254) [Link]

Actually the main Wayland developer is employed by Intel now.

Rethinking remote display

Posted Nov 6, 2010 1:55 UTC (Sat) by Lennie (subscriber, #49641) [Link]

SPICE does look pretty efficient network-wise, although I think it is currently pretty tied to kvm/qemu; I'm not sure.

Rethinking remote display

Posted Nov 14, 2010 23:23 UTC (Sun) by alon (guest, #71176) [Link]

It is tied to qemu (not really to kvm, although we don't run it on anything else), but it should be possible to make it run without qemu - although that would require some changes to the design (right now it is built on the trio of guest, host, and client; if you remove the host, the structure changes a little). Bandwidth-efficiency-wise it should be the same.

Rethinking remote display

Posted Nov 5, 2010 22:42 UTC (Fri) by foom (subscriber, #14868) [Link]

> "Just do it in the browser" seems a step backward from the security point of view; the HTTP server isn't running as me.

But why not? You can run an HTTP server as you. And then ideally you'd want to use some token-passing mechanism to prove to the server that you are you - like Kerberos, which browsers already implement. Maybe with the added detail of running a local KDC on every machine, as Apple does, to avoid needing to set up central Kerberos infrastructure.

Rethinking remote display

Posted Nov 5, 2010 23:30 UTC (Fri) by rwmj (subscriber, #5474) [Link]

Exactly. I use remote X all the time, and this "post X" stuff is simply not going to be useful to me. And if they think Mac OS X is some sort of model ("just run X on top"), it's not -- X support there is a clunky second-class citizen, and all the native apps are not network transparent, so they are not nearly as useful as they could be.

Rich.

LPC: Life after X

Posted Nov 5, 2010 22:15 UTC (Fri) by dskoll (subscriber, #1630) [Link]

I don't understand why "Network Transparency" and "High-Performance" need to be mutually exclusive. It seems to me that if the design is done carefully, then the graphics API can dynamically load a high-performance library that renders directly on the local machine for local clients, and a network-aware library for remote rendering otherwise. Something like this pseudo-code:

#include <dlfcn.h>

void *handle;

if (running_locally()) {
    handle = dlopen("fast_local_renderer.so", RTLD_NOW);
} else {
    handle = dlopen("slow_network_renderer.so", RTLD_NOW);
}

/* Now look up each entry point with dlsym() and use the resulting
   function pointers for all rendering calls internally */

All of this would be hidden from the application. It would just make calls to the rendering library and not know or care which rendering engine is actually used. Of course, on the other end, there would need to be a network-listening application to accept connections from clients, but that wouldn't even have to be a core part of the renderer. It would just use whatever network protocol is decided on and boil it down to local rendering requests.

Mobile devices that don't need network transparency just wouldn't ship slow_network_renderer.so or the network-aware server application.

LPC: Life after X

Posted Nov 5, 2010 22:37 UTC (Fri) by drag (guest, #31333) [Link]

> I don't understand why "Network Transparency" and "High-Performance" need to be mutually exclusive.

I don't think that people are suggesting that. At least especially Keith is not suggesting that.

It is just that X, without big changes, is mutually exclusive with performance. At least without huge headaches that nobody has to put up with on any other platform.

Something like that.

If getting a simpler and faster Linux means dumping X, then sacrificing X may be worth it. And even then it's not true that we need to give up X altogether. Windows and OS X users can use X remotely just about as much as we can with Linux. They both can even host X clients... but the fact that almost nobody does may help indicate how little utility most people get from X Windows networking.

I like the networking aspect, certainly. Hell, we finally have a decent sound server to go along with X networking: PulseAudio. And AIGLX works on most hardware. I can host a Linux KVM guest running Red Hat or Fedora on my Ubuntu laptop and use GDM's secure remote login stuff to get a full GUI, natively, with OpenGL acceleration and sound, from a VM!!! The Gnome/GTK folks have put a lot of work into optimizing their applications for it. X networking is better than it's ever been!

But it still may not be enough. I just don't know. X Windows networking is just one good feature among a huge number of really bad and obsolete features that are becoming increasingly burdensome, with no really positive effect except backwards compatibility.

LPC: Life after X

Posted Nov 7, 2010 20:00 UTC (Sun) by eru (subscriber, #2753) [Link]

> Windows and OS X users can use X remotely just about as much as we can with Linux. They both can even host X clients... but the fact that almost nobody does may help indicate how little utility most people get from X Windows networking.

That is because using remote X11 apps on Windows (I have no Mac experience) is highly painful compared to doing the same thing on Linux. There are just too many impedance mismatches with the native window system, and you need to add the huge X11 server software.

I expect the same thing would happen on Linux with a non-X11 "native" window system. Remote X11 (or running legacy X11 apps locally) would get a lot less convenient.

LPC: Life after X

Posted Nov 8, 2010 5:04 UTC (Mon) by jzbiciak (guest, #5246) [Link]

Well, there is one fundamental reason why it might turn out differently: If the folks who most recently maintained X develop its successor with bridging from X in mind, it's more likely to be seamless.

Bridging from X was never a design goal of Windows or MacOS.

LPC: Life after X

Posted Nov 8, 2010 9:31 UTC (Mon) by quotemstr (subscriber, #45331) [Link]

I'm not so sanguine about compatibility: we're talking about people who simply deny the existence of legitimate user requirements like network transparency. It'd be a small step for them to simply deny that users need to run X programs as well. Hopefully, they'll just be ignored. People have been trying to replace X for decades but nothing's taken off yet. I see no reason why this effort would succeed where previous ones failed.

LPC: Life after X

Posted Nov 8, 2010 10:01 UTC (Mon) by jzbiciak (guest, #5246) [Link]

> People have been trying to replace X for decades but nothing's taken off yet.

Hmmm.... I believe X has probably 1% - 2% share in graphical desktops across all computers. Just because nothing's displaced it in the UNIX/Linux space doesn't mean it's a model of wild success writ large.

And that's more than adequate to explain the lack of attention X usability / compatibility / impedance matching gets from the Windows and Mac crowds. It's not surprising X clients don't interoperate well with the native environment in those worlds. The fact that Wayland's coming from within the 1% - 2% community gives me at least some hope that its developers actually understand how X gets used and how to make it work, and work well.

As to the network transparency debate: there seem to be multiple levels here, and it all gets oversimplified when people fixate on the fact that local clients are the default focus of Wayland. There's the X way of doing things, which everyone is comfortable with inasmuch as they have programs that rely on it and that they know work. At the far other end of the spectrum are windowing environments that offer no remote access model; these are an endangered species. And in between we have a number of different options, ranging from NX, which apparently keeps the drawing-primitive flavor of X with a lot of other improvements, to VNC/RDP, which move more in the bitmap-drawing direction.

Personally, I'm still not 100% sold on network transparency at the graphic-primitive level. Nothing in Wayland seems to truly get in the way of network transparency, as long as you're comfortable with a rendered-bitmap-level protocol between client and server rather than a more highly structured graphic-primitive protocol.

LPC: Life after X

Posted Nov 8, 2010 10:42 UTC (Mon) by nhippi (guest, #34640) [Link]

> People have been trying to replace X for decades but nothing's taken off yet.

Heard about the newfangled fads "Windows" and "OS X"? X11 market share is truly marginal compared to its heyday. Even considering only Linux/Unix, there are probably more Android users than X11 users.

As Keith points out, network transparency is done mostly with HTTP these days.

LPC: Life after X

Posted Nov 22, 2010 6:31 UTC (Mon) by dododge (guest, #2870) [Link]

> There are just too many impedance mismatches with the native window system, and you need to add the huge X11 server software.

It's better than it used to be. In my current office the workstations are controlled by corporate IT, so I'm forced to use a Windows XP desktop to do my Linux development. Xming is a small, free, native X server for Windows that is very easy to install and run, and can mix remote X11 clients with native Windows apps on the same desktop. Many current X11 applications use client-side text rendering, which greatly reduces the headaches of getting core fonts working well on the Windows side. Microsoft's own TweakUI will enable X-style pointer focus across the entire desktop, which helps to smooth things out. By adjusting a registry setting I even have caps lock mapped to work like a second left-control key across all applications, and since it's part of my profile (rather than a system setting) it doesn't cause trouble for other users of the same machine.

It's certainly not perfect, but my desktop is about 3/4 Linux+X11+ssh applications and aside from the native ones not doing middle-mouse pasting it's very easy to forget that there's any Windows stuff on there at all.

LPC: Life after X

Posted Nov 6, 2010 6:07 UTC (Sat) by russell (guest, #10458) [Link]

Mobile devices would benefit greatly from network transparency. Being able to use a proper keyboard and mouse would be great.

LPC: Life after X

Posted Nov 6, 2010 6:45 UTC (Sat) by elanthis (guest, #6227) [Link]

Yeah, funnily enough, that's exactly what X does now.

And it still doesn't work out that well, because you've got a few big-ticket issues. First is that you need to develop and maintain that network-aware protocol, which is actually pretty hard. Image-based network transparency needs a single protocol for transmitting image deltas and input events, and nothing else. An accelerated protocol either needs a generic RPC framework (which has historically proven to be heinously inefficient) or a custom-tailored protocol that has to be updated every few months for the new OpenGL versions and extensions. Plus, the network protocol _still_ is transmitting images all over the place, because modern apps still do a ton of image processing client-side, and rely on really fast buses between the client app and the GPU to keep things running smoothly, so you still end up needing a fast image transfer portion to the protocol. Plus there are the latency issues -- and this is the real killer -- because apps are using the GPU more and more for non-graphical processing, and data needs to transition back and forth between the GPU and CPU many times when carrying out even relatively simple tasks. For instance, picking an object when you click a mouse button can be, and in many apps and frameworks is, implemented by rendering silhouettes in fixed colors to a buffer, then selecting the color at the texel corresponding to the cursor. Separating the CPU and GPU will make this god-awful slow.
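
That color-picking trick can be sketched in a few lines (legacy desktop GL assumed, names illustrative): every object is first drawn in a unique flat color, and the hit test is a single-pixel readback - exactly the GPU-to-CPU round trip that a network in the middle makes painful:

#include <GL/gl.h>

/* Assumes the scene was just rendered with each object's id encoded
   as a unique flat RGB color. GL's framebuffer origin is bottom-left,
   so the cursor's y coordinate must be flipped. */
unsigned pick_object_at(int x, int y, int window_height)
{
    unsigned char px[4];

    glReadPixels(x, window_height - 1 - y, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, px);
    return ((unsigned)px[0] << 16) | ((unsigned)px[1] << 8) | px[2];
}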

Turns out that the image-based remoting works damn fast. Modern image compression techniques blow away what we had when VNC and X were still the top dogs in remoting. It's really quite trivial to push 60 frames per second of 1080p image content reliably on low-bandwidth pipes (and that's FAR more than any app other than a video player or game would ever need), and then all you need is the input events being sent back.

And there's no reason this has to be a whole-desktop affair rather than a trivially-easy-to-use per-application transparent setup. Wayland, in fact, makes this way easier than X does! Redirect the app to an offscreen buffer (just like the compositor already does), but instead of rendering it to the screen you compress, motion-diff, and encode the data and push it across a channel in your SSH session (just like X already does), and then the remote end decodes and displays the result. Send input events back. Super easy, and there's no reason it would be any more difficult to use as an end-user than what X gives you today. And it works better, is forward-compatible with whatever advances come about in GPU technology, etc.
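
The frame-differencing core of such a scheme is small; here is a sketch assuming a fixed 1024x768 32-bit frame and 64-pixel square tiles, with compression and the actual SSH transport left to a hypothetical callback:

#include <stdint.h>
#include <string.h>

#define W    1024
#define H    768
#define TILE 64   /* W and H are multiples of TILE */

/* Compare the newly composited frame against the previous one and hand
   only the dirty tiles to the transport for compression and sending. */
void send_damaged_tiles(const uint32_t *prev, const uint32_t *cur,
                        void (*send_tile)(int tx, int ty))
{
    for (int ty = 0; ty < H; ty += TILE)
        for (int tx = 0; tx < W; tx += TILE) {
            int dirty = 0;
            for (int y = ty; y < ty + TILE && !dirty; y++)
                dirty = memcmp(&prev[y * W + tx], &cur[y * W + tx],
                               TILE * sizeof(uint32_t)) != 0;
            if (dirty)
                send_tile(tx, ty);
        }
}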

About the only thing you'll lose is the ability for a headless box with no GPU to get accelerated rendering on your desktop, but as I pointed out above you don't have that anyway, at least not for anything beyond the incredibly anemic and borderline useless 1.x versions of OpenGL.

Oh, and the Wayland-based network transparency could reuse an existing image-based protocol so it actually becomes easy to display those networked UNIX/Linux apps on a Windows/OSX/iOS/whatever machine without needing to jump through the hoops of getting a huge crazy X server installed. In fact, it can transparently work with any app from any OS. Neat.

Is all this written and working yet? No, of course not. That's no reason to claim it can't be written, though, or that it won't. In the end, the people who actually need something better than X and are willing to put in that effort are going to get what they want, and the people who want to keep esoteric X features but aren't willing to do the work to keep them on the modern architecture will lose out. That is the way of Open Source, as it is.

LPC: Life after X

Posted Nov 6, 2010 7:53 UTC (Sat) by dskoll (subscriber, #1630) [Link]

> Turns out that the image-based remoting works damn fast. Modern image compression techniques blow away what we had when VNC and X were still the top dogs in remoting.

The problem is that image-based remoting needs to use lossless compression or your applications end up looking like crap. I'm not so sure that can be done as easily as you suggest.

LPC: Life after X

Posted Nov 6, 2010 10:36 UTC (Sat) by nim-nim (subscriber, #34454) [Link]

1080p is far more than a video player needs, but it is far less than most apps need.

Because most apps display text, and clean text rendering requires a lot more pixels than that (not to mention that all the current tricks for mitigating the lack of pixels for text rely on knowing exactly the hardware pixel layout, something you can't do with a generic image protocol).

Image protocols are good enough only if you restrict remoting to special emergency uses where text rendered like crap does not matter.

LPC: Life after X

Posted Nov 6, 2010 11:43 UTC (Sat) by alankila (guest, #47141) [Link]

Completely hypothetically:

Why couldn't there be a negotiation that describes the optimal pixel format so that GUI programs could render their text optimally?

Also, text doesn't change all the time, so on average it probably takes far less bandwidth to deal with than video, because you only have to transmit the pixels once, and then the user spends a lot of time reading. If you know you're dealing with text (say, because the user told you so) you could disable lossy compression schemes too.

Favorite rant of mine:

Not that I expect the introduction of Wayland to result in good text rendering on Linux. The layering of text onto window images has always been treated as a gamma=1.0 alpha-blending problem on Linux, the end result being awful color fringing and varying weight on diagonals. These problems are not going to go away until someone finally designs it right from day one.

All I can hope for is that by complaining about this eventually someone will wake up and design a text layering pipeline that can do gamma-corrected alpha blending. Until that day, our fonts will continue to suck. There has been some hope recently with sRGB surface support specced in OpenGL, so I can only hope and beg that this flag will get turned on whenever a bitmap representing text is about to be combined with underlying graphics.

Color Management in a Wayland World

Posted Nov 7, 2010 15:06 UTC (Sun) by ccurtis (guest, #49713) [Link]

There is a link to the mailing list from the project site here: http://wayland.freedesktop.org/

Nothing is mentioned in any of the documentation I've been able to find, and I think it's valid to ask who's doing color management. Wayland appears to be oblivious to the issue, and pushing it into every client seems like a mistake.

I hear this issue is basically solved in the OS X display server, and whatever comes after X needs to have parity here. The problem may be most obvious in fonts and antialiasing, but it's certainly not limited to them.

LPC: Life after X

Posted Nov 6, 2010 13:00 UTC (Sat) by Los__D (guest, #15263) [Link]

Errr, what are you talking about?

Most displays today are less than 1920x1080, and by far most apps need less than that.

LPC: Life after X

Posted Nov 6, 2010 14:44 UTC (Sat) by dskoll (subscriber, #1630) [Link]

> Plus, the network protocol _still_ is transmitting images all over the place, because modern apps still do a ton of image processing client-side

And that's the real problem. If apps restricted themselves to using drawing primitives, the network wouldn't be a problem. Requesting the server to draw rectangles, polygons, etc. is extremely efficient and beats any kind of image compression.

The problem is that apps decided they wanted fancy textures, background images, etc. So they essentially did all the rendering client-side and treated the X protocol as a big dumb pipe for shipping images.

The correct approach is to move more support for fancy effects into the server so the clients don't have to render on their end. Unfortunately, there's no easy way to get around the use of textures and images, so here's a radical concept: don't do it unless it's necessary (e.g., for games). And provide an escape hatch (e.g., shared memory) for local apps that really do need to sling lots of images and textures around.

LPC: Life after X

Posted Nov 6, 2010 18:51 UTC (Sat) by drag (guest, #31333) [Link]

With X, a lot of the poor performance is due to latency, not bandwidth.

Sending a compressed texture that is something like 1024x768 over most networks is not going to be a problem any more in a lot of cases.

A lossless 1280x800 PNG image is itself only something like 260.1KB, which will transfer over most internet connections in a fraction of a second. High-quality JPEG or WebP is even smaller and compresses much faster, with relatively little discernible image degradation.

It's when you run into issues with applications that want to have something like an animated menu or whatever that takes 100 redraws to go from start to finish. When your on a local machine something like that is just stupidly fast and it is irrelevant. When your over the internet something that used to take 0.1 seconds now takes 5 due to all the time lost to latency from going back and forth 'draw' 'finished draw' 'draw again' etc etc.

When the fastest network most people had was 10 people sharing a single 10Mb/s ethernet on a single hub with all of them sharing the same collision domain... THEN that was when X networking was very troublesome in terms of Mb/s used.

Nowadays even common consumer internet connections are faster than that.

But when you have 128msec latency and it takes 2000 round trips between a server and a client to draw a new web page on your browser.... THEN that is when you run into serious performance problems. It does not matter if your sending 10's of KBs of information or your just sending 5Bs each trip it's going to create huge delays.

Your far better off just taking an image of a 1024x768 desktop at 15 FPS and sending it over the network, then working out some special protocol to relay input back. (I am not sure about SPICE or ICA, but I am pretty sure that their technology is more sophisticated than just that.)

This is why people report VNC working better than X when it's obvious that, in terms of actual bandwidth used, X is often going to be better.

But it's not like VNC or X is even close to the state of the art. Both of them are obsolete with their own set of problems.

Seriously, check out:
http://www.gotomypc.com/remote_access/remote_access

While people have been arguing over the merits of being able to remotely access a single application over X vs. an entire desktop over VNC... the ability to remotely access your GUI over the internet has gone mainstream.

ANY PC, ANY Mac. Over your browser. Very simple to set up, relatively inexpensive, adequately secure, and good enough that the average customer can use it without pulling their hair out.

You can even do it on your iPhone or iPad...

Sure, I am not going to use it, and it's not suitable if you care about your security, but the networking aspect of X is far from unique or special anymore, and its performance in common situations is inadequate compared to contemporary solutions.

LPC: Life after X

Posted Nov 6, 2010 18:55 UTC (Sat) by drag (guest, #31333) [Link]

Oh. And with my job I have the unfortunate requirement of having to use a number of windows-only applications on a regular basis.

Many of these applications are, in fact, virtualized and are individually remoted to my desktop. The way it is done is completely transparent, and there is not a single non-technical user on a corporate Windows desktop who will be able to tell you which applications are remote and which ones are local. They will not even be able to tell you that they are using virtualized applications remotely at all.

The experience is completely integrated and there is no discernible way, in terms of performance or image quality, to tell the difference between local and remote apps.

LPC: Life after X

Posted Nov 6, 2010 20:09 UTC (Sat) by dskoll (subscriber, #1630) [Link]

> It's when you run into issues with applications that want to have something like an animated menu or whatever that takes 100 redraws to go from start to finish.

So it's just bad application design, then. Or if you really want fancy animations, you do as I suggested: Let the server do the fancy effects. The client sends a little bit of information specifying how to do the animation (start, number of steps, time increment, etc.) and the server handles it. Augmenting X or something similar to do that wouldn't be too hard.
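
To make "a little bit of information" concrete, a wire message for such a hypothetical animation extension could be as small as this; every name below is invented for illustration:

#include <stdint.h>

/* Hypothetical request: the client describes the whole animation once,
   the server plays it back locally, and no further round trips occur. */
struct animate_window_request {
    uint32_t window;       /* target window id */
    int16_t  dx, dy;       /* offset applied at each step */
    uint16_t steps;        /* number of frames */
    uint16_t interval_ms;  /* delay between frames */
};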

Somewhat off-topic: The contraction of "you are" is "you're", not "your". Sorry... just a pet peeve...

LPC: Life after X

Posted Nov 7, 2010 4:39 UTC (Sun) by drag (guest, #31333) [Link]

> So it's just bad application design, then.

Sometimes. Or bad toolkit design, or maybe just a different theme that the user selected. And it's only bad design if your trying to run your software over a high latency link, otherwise it's probably quite sane.

> Or if you really want fancy animations, you do as I suggested: Let the server do the fancy effects. The client sends a little bit of information specifying how to do the animation (start, number of steps, time increment, etc.) and the server handles it.

Ya. OpenGL works surprisingly well sometimes. When AIGLX was first supported on my video card I ran 1024x768 Wolfenstein (an improved Quake 3 engine) over wireless. It worked very well, and I got about 48-60 FPS.

It was playable for the most part, except the mouse lagged horribly. Keyboard input was fast and everything rendered fine otherwise.

Cairo and Clutter may help out quite a bit I suppose. I don't know for certain.

> Augmenting X or something similar to do that wouldn't be too hard.

It's called NoMachine NX. ;)

LPC: Life after X

Posted Nov 7, 2010 7:14 UTC (Sun) by mfedyk (guest, #55303) [Link]

"It was playable for the most part, except the mouse lagged horribly. Keyboard input was fast and everything rendered fine otherwise."

That is because mouse processing takes so many round trips. To see this in action, run dstat in a terminal, then move your mouse in a slow, steady circle. Your context switches per second will go up by several hundred.

Facts of life

Posted Nov 7, 2010 15:35 UTC (Sun) by khim (subscriber, #9252) [Link]

> So it's just bad application design, then.

Sure.

> Or if you really want fancy animations, you do as I suggested: Let the server do the fancy effects.

No - because the client-side solution is even simpler to implement.

> The client sends a little bit of information specifying how to do the animation (start, number of steps, time increment, etc.) and the server handles it.

No again. It's harder than just writing the loop in the program.

> Augmenting X or something similar to do that wouldn't be too hard.

Well - can you do it in such a way that your scheme is easier to use than the naive animation implementation?

You cannot do that. Applications are the sole justification for the fancy protocols, kernels, and computers. Thus application writers dictate the rules. The only time you can impose some restrictions is when they are caused by the laws of physics, because that is the only situation where all alternatives impose the same restrictions and application developers have no choice; otherwise the solution that is simplest to use, from the coder's POV, will always win.

Witness the fate of transputers versus modern CPUs: today we have 16-way SMP on the desktop, while transputers offered 32 or 64 ways twenty years ago. Why did we only switch to SMP on the desktop five years ago? Easy: before that, it was possible to create more and more powerful UP machines. Only when UP machines hit a hard limit (the speed of light, essentially) did the direction change.

The same goes for animated menus today: application developers do, and will continue to, write applications that abuse the fast CPU<->GPU connection as long as it works on the desktop. They will not change their designs to accommodate "network transparency", because their users mostly don't care. Thus all these ideas are not worth even talking about. If you can explain why/when they will stop working even on the desktop, then we can talk about redesign.

LPC: Life after X

Posted Nov 8, 2010 4:46 UTC (Mon) by elanthis (guest, #6227) [Link]

Your so-called solution goes against X's core design and principles, though. X doesn't do fancy effects, or any effects. X is mechanism, not policy. X is just a dumb rendering and event pipeline, by design.

The drawing primitives suck. You may not realize this; it's probably been a long time since you've seen or worked with a mainstream app that limits itself solely to the X drawing primitives.

We don't want that. Users don't want that. The whole world -- other than a teeny tiny little fraction of people so small in significance as to be entirely irrelevant -- wants pretty UIs. Pretty UIs are actually _more usable_, as tasteful and skillful application of that prettiness results in more easily comprehensible and digestible information display and user focus direction. Put plainly: that shit matters.

If you're really that interested in continuing to use the Xerox PARC UI innovations and nothing else, knock yourself out. Arguing today that the people designing the graphics framework that goes beyond what X is capable of should instead stick to basic rendering primitives is every last bit as stupid as an old-time radio host arguing that radio is the best medium ever while the Internet has already started killing off TV, which already killed radio dead.

The other problem is that you seem to have no grasp of how modern rendering is done. When I say "the client" does the rendering, what I actually mean is that the client is programming a GPU to do the rendering. What you end up needing, to do things your way, is a complete implementation of OpenGL over the pipe (which GLX is NOT, in any way). You also completely ignored the parts about using the GPU for more than just graphics, including input handling and other general computation that you absolutely do not want in the display server, at all, period.

Your notion of how the desktop should work is wrong, dated, and (thankfully) totally irrelevant as you're not the one making the development decisions.

LPC: Life after X

Posted Nov 8, 2010 9:27 UTC (Mon) by quotemstr (subscriber, #45331) [Link]

You missed the point. He's proposing something like a motion-XRENDER, not animation done using the traditional drawing primitives. There is no reason modern UIs cannot be accommodated in X's extension framework. If people like you make "development decisions", we'll all be impoverished.

LPC: Life after X

Posted Nov 8, 2010 16:44 UTC (Mon) by dskoll (subscriber, #1630) [Link]

> Your so-called solution goes against X's core design and principles, though. X doesn't do fancy effects, or any effects. X is mechanism, not policy. X is just a dumb rendering and event pipeline, by design.

Yeah, so? Change that aspect of X rather than throwing out the whole thing. Eventually, the best way to do things would be to have toolkits like Qt and Gtk be loadable modules that get installed in the X server rather than in client applications. That could greatly reduce the number of network round-trips required and greatly mitigate the latency problem.

There are plenty of security concerns with this, of course. You wouldn't want to load Gtk or Qt into the X server unless it's running with your UID. But that's a much easier problem to solve and a much smoother transition to the future than throwing out X completely.

Smooth transition? From what?

Posted Nov 9, 2010 15:23 UTC (Tue) by khim (subscriber, #9252) [Link]

> But that's a much easier problem to solve and a much smoother transition to the future than throwing out X completely.

How come? You seem to assume that developers are using X and that the only problem here is some shortcomings. Well, newsflash: no, they aren't! Most applications today are written for Windows, PS3, Wii, iOS or Android. Not for X. Developers know toolkits (mostly GDI, but sometimes WPF or even Qt) and DirectX/OpenGL. They don't know X and they don't want to know X. This is a fact of life. That's why all these band-aids are doomed: they impose a burden on developers for negligible benefit.

X is this thing down there that only exists to make our lives miserable - that is the POV of many (most?) developers. That's why it must be removed.

> Eventually, the best way to do things would be to have toolkits like Qt and Gtk be loadable modules that get installed in the X server rather than in client applications.

But why introduce this stupid layer at all? Give developers the means to run a client app that talks to the GPU and the server - and they'll decide how to split the work. This is how it works on Windows, Xbox 360, and PS3 - and it has certainly attracted significantly more developers than X redesigns ever could.

Smooth transition? From what?

Posted Nov 17, 2010 7:11 UTC (Wed) by mcrbids (guest, #6453) [Link]

> Most applications today are written for Windows, PS3, Wii, iOS or Android. Not for X. Developers know toolkits (mostly GDI, but sometimes WPF or even Qt) and DirectX/OpenGL. They don't know X and they don't want to know X.

As an application developer with over 10 years of experience, I can say with certainty that this is not true, at least, not for me.

I don't write applications for Windows, PS3, Linux, Android, or *any* of the platforms listed. I write for the web browser! I write complex, data-driven applications, and it's been a very, very long time since I wrote anything that wouldn't easily work on Win/Mac/Lin/Android/iPhone/Xbox and anything else with a reasonable browser.

The browser I most target is Firefox, since it seems to be the most cross-platform, although Chrome is close. I develop on Linux: it runs FF well, I don't worry about viruses and stuff like that, and I can offer excellent compatibility with all my clients.

I don't want to replace X - I get the best of all possible worlds by making the specific rendering requirements of my applications something handled by the context of the user. And I use network transparency all the time - I can run several Firefox instances concurrently, on the desktop, as different users, without any danger of cookie or session interaction between browsers. As a web-based, network application developer, this is so incredibly useful!

LPC: Life after X

Posted Nov 8, 2010 16:48 UTC (Mon) by dskoll (subscriber, #1630) [Link]

> Your notion of how the desktop should work is wrong, dated, and (thankfully) totally irrelevant as you're not the one making the development decisions.

That certainly deserves a *plonk*. How about trying to stay civil?

LPC: Life after X

Posted Nov 8, 2010 12:16 UTC (Mon) by dgm (subscriber, #49227) [Link]

> The problem is that apps decided they wanted fancy textures, background images, etc.

No, the problem is that users want this. You can blame them as much as you want, but it's all futile because in the end users still want that.

We have two options: we can build the stuff that's needed, or we can close ourselves in our small ivory tower and wait until the world goes away. Which one do you want to choose?

LPC: Life after X

Posted Nov 8, 2010 14:17 UTC (Mon) by madscientist (subscriber, #16861) [Link]

> We have two options: we can build the stuff that's needed, or we can
> close ourselves in our small ivory tower and wait until the world
> goes away.

The question is, build what's needed by WHOM?

You've had many, many people (myself included) state that what the users (us) want and need is the equivalent of today's network transparency features that X provides.

Yet many people here seem perfectly happy to ignore and pooh-pooh the users who really matter (those that we already have!) in some quest to obtain users that we don't have (those that use, and almost certainly will continue to use, Macs and Windows systems).

Ivory tower dwellers are people too!

WHOM is right :-)

Posted Nov 9, 2010 15:57 UTC (Tue) by khim (subscriber, #9252) [Link]

> The question is, build what's needed by WHOM?

Well, paying customers, obviously. Someone must pay for the development of the new hardware and software - and "he who pays the piper calls the tune".

> You've had many, many people (myself included) state that what the users (us) want and need is the equivalent of today's network transparency features that X provides.

Sure. And there are many, many people who care about Z80 programming on the TI-84. Should we now think about Z80 compatibility when we write USB drivers?

> Yet many people here seem perfectly happy to ignore and pooh-pooh the users who really matter (those that we already have!) in some quest to obtain users that we don't have (those that use, and almost certainly will continue to use, Macs and Windows systems).

Huh? Where did you get this idea?

Fact: most Linux users don't care about network transparency at all.

Proof: more than 50 million Android phones have been sold so far. There are fewer than 1.5 billion PCs in the world today, so if we assume 3% of them run Linux (a typical estimate) we find that there are more Android users than desktop Linux users.

"Traditional Linux users" who care about network transparency and other such things are fast becoming minority - and this is exactly why we are talking about "life after X" today.

We can, of course, say that we only care about ourselves and that Android, ChromeOS, MeeGo, and other such things are "not Linux" and "we don't care" - but that will only mean that Linux gets left in the dust while non-Linux (based on the same kernel!) replaces it.

WHOM is right :-)

Posted Nov 12, 2010 0:40 UTC (Fri) by jmorris42 (guest, #2203) [Link]

> Proof: more than 50 million Android phones have been sold so far.

Who cares? Seriously, counting Android users as Linux users is as crazy as counting OS X users as UNIX users; in both cases the *NIX underpinnings are only used as a hardware abstraction to host an environment alien to the typical Linux/GNU/X stack we call "Linux" as a shorthand. More power to 'em and all, but let's not make apples-to-oranges comparisons.

> "Traditional Linux users" who care about network transparency
> and other such things are fast becoming minority

Well, take yer asses off and do your clone of all the bad ideas the existing userbase fled screaming from. It is Free Software, after all. Here is a clue: most of us came to *NIX because we saw the benefits and could also see the horrid mess the DOS/Windows world was. Why would we now want to toss one of the most wonderful ideas in computing to become more like the mass-market drones? The network is the computer, the computer is the network. It isn't just a marketing slogan for some of us.

Besides, I never understood why suddenly everything needs to be rendered client-side as bitmaps. Extend X to give it the mechanisms to support the things it can't currently do, and make sure they can run over the wire. Why can't X do the modern font tricks, compositing, or whatever over the wire? Why isn't the default artwork scalable SVG? And isn't noting that Apple does something normally sufficient to end an argument? Well, they do Display PostScript (ok, they had to modernize the terminology to Display PDF) on OS X, so obviously the idea is still mainstream, right?

Is it good that Linux is finally getting the last chunk of device support, taking over mode setting and the general driving of the video hardware? Of course. Is it good that this makes it easier for upstarts to experiment with new display systems? Of course! Might this someday lead to a replacement for X? Perhaps. But any replacement has to be able to do the things X does if the target is a general computing environment instead of embedded. We have decades of existing apps and little desire to toss 'em all and start over. If we wanted to do that we could all just ditch the whole stack and go run Haiku or something.

LPC: Life after X

Posted Nov 11, 2010 10:09 UTC (Thu) by dgm (subscriber, #49227) [Link]

> The question is, build what's needed by WHOM?

By us, of course! I'm sorry, because I'm going to introduce a little bit of FUD, but I think it's a healthy dose:

The problem is not that we are not well served by what we currently have. The problem is that most computer users out there are not, and as a consequence those people select other operating systems. Why should we care? Because marginal platforms can only survive for so long. Yes, there are still people writing code for the C64 and the Amiga, but they are few and mostly irrelevant, except to themselves. Do you want to see Linux there? I certainly don't.

So it's OUR need to appeal to as many users as we can, and serve them well in as many use cases as we can, to ensure our own survival. Specialization and immobility would be the death of the platform.

LPC: Life after X

Posted Nov 6, 2010 20:48 UTC (Sat) by njs (subscriber, #40338) [Link]

> Redirect the app to an offscreen buffer (just like the compositor already does), but instead of rendering it to the screen you compress, motion-diff, and encode the data and push it across a channel in your SSH session (just like X already does), and then the remote end decodes and displays the result. Send input events back. Super easy.

Yeah, it's entirely feasible to do this with the current X stack (start a headless X session, run apps in there, and then attach as a compositing manager to get at the pixels in each window and ship them across SSH individually). I use this every day, and the initial version was a two-weekend hack.

It turns out that all the hard parts are in dealing with X nonsense -- coordinating app<->WM interaction between the two sides is nasty, you have to juggle this ridiculous stack with your headless X server, and I'm not convinced that it's possible to get keyboard handling really right.

So I agree with those saying that X network transparency is not very interesting anymore -- we can and will accomplish network transparency somehow, and it'll likely be better for not being baked into the gui system itself. OTOH, a lot of that complexity I mention is just intrinsic in the task -- you need conventions for how apps will talk with each other, input will be configured and routed, etc., and that complexity will end up in your protocol one way or another.
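
For concreteness, the loop described in the quoted comment might look roughly like this sketch (Python; grab_frame() is an invented placeholder for reading the redirected window's pixels, and the framing protocol is made up):

# Ship compressed frame diffs over an established socket. The remote
# end would decompress, XOR against its previous frame, and display.
import struct
import zlib

WIDTH, HEIGHT, BPP = 640, 480, 4

def grab_frame():
    # Placeholder: a real version would read the window's redirected
    # offscreen buffer (e.g. via the Composite extension).
    return bytes(WIDTH * HEIGHT * BPP)

def stream_frames(conn):
    prev = bytes(WIDTH * HEIGHT * BPP)
    while True:
        cur = grab_frame()
        # XOR against the previous frame: unchanged pixels become zero
        # bytes, which compress extremely well.
        diff = bytes(a ^ b for a, b in zip(cur, prev))
        payload = zlib.compress(diff)
        conn.sendall(struct.pack("!I", len(payload)) + payload)
        prev = cur

Input events would travel back the other way on the same channel.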

LPC: Life after X

Posted Nov 5, 2010 23:35 UTC (Fri) by jengelh (subscriber, #33263) [Link]

> and VDPAU is an improvement over Xv

My experience from a digital signage project: VDPAU has been quite crappy. It requires significantly more CPU power than Xv when displaying two 1920×1080 WMV3 clips at the same time. (With a single one, it's all fine - go figure.) If it were not for the nvidia driver only providing a maximum XvImage size of 2048×2048, I would still be using Xv to display a single 3840×1080 clip. And since I have seen 8192×8192 offered somewhere by some radeon* or similar driver, I don't believe that 2048 is due to a hardware limitation.

Xv hardware limit

Posted Nov 6, 2010 11:10 UTC (Sat) by tialaramex (subscriber, #21167) [Link]

It is, or can be, a hardware limitation, which is why there are advertised limits.

You mentioned Radeon. If you buy a brand-new Radeon desktop chipset, it does a nice big XvImage properly. But if you go back a few years (or go down a few price ranges) you get older chips with a maximum of... drum roll... 2048x2048.

It was a lot back then. I have a desktop PC with TWO video outs, and a maximum supported desktop width across those two outputs of 2048. What use is that? But when the chip was made, 1024x768 was still a decent resolution, and so 2048 was two whole monitors...

I don't know which NVidia driver you're looking at. It's possible (not certain) that newer NVidia hardware can do 8192x8192 but your driver doesn't expose that, either through an oversight or because it uses an older hardware spec and hasn't been updated for a new one where some extra bitfields were added or a new method was used for larger textures.

LPC: Life after X

Posted Nov 6, 2010 10:31 UTC (Sat) by AlexHudson (guest, #41828) [Link]

What a difference presenting an issue in a constructive way makes: laying out the entire problem space, looking at different solutions, and showing a couple of paths out.

The best leaders we have in the free software space have this ability to talk about a problem in a way that shows people they've thought about the various corner cases and that their way forward is both practical and progressive.

It would be easy to compare this directly to the unilateral statement Mark Shuttleworth made the other day; I suspect that would be a little bit trite, but the point stands: you can only lead when you take people with you.

LPC: Life after X

Posted Nov 6, 2010 11:46 UTC (Sat) by robert_s (subscriber, #42402) [Link]

"If we get rid of window managers somebody else has to do that work; Windows and Mac OS push that task into applications, maybe we should too."

Please god no.

Frankly, I quite like the way the design of X, to some extent, enforces sane behavior in my desktop applications. This new "post-X" world is sounding a lot like the wild west in comparison.

LPC: Life after X

Posted Nov 6, 2010 16:24 UTC (Sat) by kberg (guest, #4963) [Link]

+1 "Please god no."

Anyone who thinks having applications manage their own windows is a good thing needs to spend more time with applications that become unresponsive on Windows. If you think about this from a usability perspective for newbies, clicking on the "X" in the upper right-hand corner of an unresponsive app is much easier than trying to find and learn about Task Manager on Windows if you don't already know about it.

LPC: Life after X

Posted Nov 6, 2010 19:14 UTC (Sat) by alankila (guest, #47141) [Link]

Err, not sure what you are complaining about. Windows does notice when you try to interact with a nonresponsive app, and asks you if you want it killed. It does take some time before it decides that the app is gone, though.

LPC: Life after X

Posted Nov 6, 2010 19:30 UTC (Sat) by halla (subscriber, #14185) [Link]

On OSX, a hanging app blocks the complete menu, including the apple system menu. I know about the shortcuts, but it's very inconvenient. To me, window management isn't an application concern.

LPC: Life after X

Posted Nov 7, 2010 5:24 UTC (Sun) by martinfick (subscriber, #4455) [Link]

It's not about killing unresponsive apps, it's about being able to live and operate with (potentially temporary) slow apps without being impacted by them. Killing an app is hardly an acceptable solution. I can pull the plug too if things get really bad, but I doubt that is a desirable solution. :)

LPC: Life after X

Posted Nov 6, 2010 23:22 UTC (Sat) by The_Barbarian (guest, #48152) [Link]

+1 from me too

LPC: Life after X

Posted Nov 7, 2010 18:35 UTC (Sun) by iabervon (subscriber, #722) [Link]

The reason I think that window managers are necessary is some of the simple functions like focus management; Windows and OS X get around that by not supporting any useful policies, but it's highly unlikely that a set of applications would all agree on the behavior of my chosen focus policy. Furthermore, I have window manager hotkeys for things like window-shade, maximize, and iconify, and I've intentionally selected key combinations that applications don't generally detect (e.g., pressing both shift keys) so as not to interfere with application key usage.

For that matter, window management stuff can be important on a cell phone; while you're on a call, you should be able to put the call management application into a small portion of the screen and use arbitrary other applications in the majority of the screen, while still being able to control the call without interfering with the other application. For example, you're on a call, and you have to take notes, so you put it on speaker and go to an editor; while you're in the middle of taking notes, there's noise in the room you're in, so you need to mute, and later someone asks you about something in the notes, so you have to unmute, read from the application, and mute again. Phones I've seen don't provide a way to do this sort of thing without needing to find the application (and sometimes the document) again each time you do phone things. Outside of the desktop, there's little call for the ability to have a dozen ongoing tasks that you switch between, but there are situations in which you want two or three.

LPC: Life after X

Posted Nov 18, 2010 18:23 UTC (Thu) by tjc (guest, #137) [Link]

+1 from me as well.

> We have graphical toolkits which can implement dynamic themes, so it is no longer necessary to run a separate window manager to impose a theme on the system.

It's not the theme that I'm concerned about, it's consistent window management, and the ability to do unpopular things, like lower windows with the mouse.

LPC: Life after X

Posted Nov 6, 2010 13:55 UTC (Sat) by kjp (guest, #39639) [Link]

How is the clipboard going to work? :) That's part of ICCCM, it appears, so I think an awful lot of people want copy and paste to work.

LPC: Life after X

Posted Nov 6, 2010 17:40 UTC (Sat) by i3839 (guest, #31386) [Link]

That's trivial to do with a library; you don't need X for that. Copy and paste is trivial no matter how you do it - the only thing needed is to let all programs agree on how to do it.

I'd probably do it via shared memory, but there are plenty of other ways of doing it.

LPC: Life after X

Posted Nov 6, 2010 19:30 UTC (Sat) by PO8 (guest, #41661) [Link]

"Copy and paste is trivial no matter how you do it."

HA Ha ha ha... you were joking, right?

I think that X stands as a 20-year demonstration of the opposite proposition. I used to think like you do, and swore at one point a while back that I would clean up X's cut-copy-paste issues. I even publicly solicited help, although I found it curious that the more experienced people were with it, the less they wanted to get involved :-). Once I investigated the scope of the problem, I just gave up.

The current ICCCM solution to X cut-copy-paste is horrible: incomplete, inadequately honored, and confusing as hell for users and developers. This is *not* because smart people haven't thought hard about the problem, individually and collectively. It's because cut-copy-paste is hard.

I once put an entire software engineering class through a six-week exercise in just trying to provide an adequate specification for X cut-copy-paste behavior. It was a total failure, and the students mostly ended up hating me. They felt like idiots because they couldn't get their heads around this "trivial-looking" problem. I hope it taught them some valuable lessons about requirements specification, anyhow.

Cut-copy-paste is the UI monster in the closet. Back away slowly.

You must be joking

Posted Nov 6, 2010 20:21 UTC (Sat) by i3839 (guest, #31386) [Link]

Sorry, no, it really is trivial. And I'm not talking about ICCCM or any other existing implementations, just if you were to start a new one from scratch, as is the case for Wayland.

You can try to make it awfully complicated by trying to copy and paste stuff other than text and by not choosing a common character encoding (UTF-8), but then you have yourself to blame, not the trivial problem at hand.

And any sane copy&paste implementation that supports "complex" types is a simple text one at heart: programs that want to support things like pictures do so by passing the needed information along in the form of text, and the complex data itself is just a temp file. Then they can mess up the extension area as much as they want without breaking simple text copy&paste for the rest of us.

Limiting the scope of something is better software engineering than trying to support everything.

But as you seem to have some experience with it, please enlighten us with one example of why copy and paste is oh so hard.

Writing a 3D driver with decent performance, now that is non-trivial.
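
For concreteness, here is a minimal sketch of the two-type scheme being argued for (Python, all names invented; an illustration of the claim, not an endorsement of it):

# Toy model: one clipboard slot, two types. "Complex" data becomes a
# temp file whose path (plus MIME type) is what gets copied as text.
import tempfile

_clipboard = {"type": "text", "text": ""}

def copy_text(s):
    _clipboard.update(type="text", text=s)

def copy_complex(data, mime):
    f = tempfile.NamedTemporaryFile(suffix=".clip", delete=False)
    f.write(data)
    f.close()
    _clipboard.update(type="extended", text="%s (%s)" % (f.name, mime))

def paste():
    # A program that understands the extended type can open the file;
    # anything else just sees the path string.
    return _clipboard["type"], _clipboard["text"]

The in-band-signaling objection raised in the replies below is visible right in paste(): without the type tag, a receiver cannot reliably tell the path string from literal text that happens to look like one.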

You must be joking

Posted Nov 6, 2010 20:30 UTC (Sat) by quotemstr (subscriber, #45331) [Link]

Oh, that's a great idea: if a user copies an image from his editing program and pastes it into his word processor, he gets hundreds of kilobytes of base64-encoded gunk? You clearly haven't thought through this problem. Any copy-paste system good for more than plain text requires type metadata.

You must be joking

Posted Nov 6, 2010 20:46 UTC (Sat) by i3839 (guest, #31386) [Link]

Sorry for not making myself clear enough:

> And any sane copy&paste implementation that supports "complex" types is
> a simple text one at heart: programs that want to support things like
> pictures do so by passing the needed information along in the form of
> text, and the complex data itself is just a temp file.

To clarify, "the needed information" is not the image data, but as you point out, a file path and other metadata, if needed.

So you won't get garbage, but something like "/tmp/stupidimagefile.png" (and perhaps the MIME type).

Does this address your point?

You must be joking

Posted Nov 6, 2010 20:48 UTC (Sat) by quotemstr (subscriber, #45331) [Link]

In-band signaling doesn't work. What if I want to paste the text "/tmp/stupidimagefile.png (image/jpg)"? You're just replacing one complex scheme with another; out-of-band signaling is more robust and after all is said and done with your encoding stuff, it's probably less complex.

You must be joking

Posted Nov 6, 2010 21:12 UTC (Sat) by i3839 (guest, #31386) [Link]

*sigh* I feared I would have to explain that too.

Best would be a copy&paste system that only supports text, but because people want to copy other crap, you have to support other crap somehow anyway. So yes, you have to add a type to the copied data. But instead of going the way of madness and trying to add a type for every bloody kind of data there is, just add two: normal plain text, and extended plain text. The second is the file path + metadata (if needed). And lo and behold, your copy&paste system is done and finished.

If you want to add network transparency, just copy the tmp file over. Simple as that. The CP system stays the same no matter what data types come and go.

Now, instead of picking at unmentioned details, could any of you please come up with real problems instead?

And no, I haven't thought it thoroughly through. There's no need, as I said, it's trivial stuff, as long as you don't shoot yourself in the foot.

So much for edit buttons...

Posted Nov 6, 2010 21:25 UTC (Sat) by i3839 (guest, #31386) [Link]

Same mistake as before: I forgot to mention that seeing the file path is the worst that would happen, in case a program always blindly pastes the text.

Then again, it's probably good if programs paste the text for types they can't handle, so the user can open the file directly.

That makes me think of one thing PO8 might find non-trivial: how to copy and paste complex data types around within a program, while using the same keys for system-wide copy&paste.

Either just export and import that bit to a file of the format that program prefers, or just check before pasting whether it's the data the program copied before or something new. Simple enough.

Some other potentially scary non-trivial problem: what if a program exits? Again, no problem, because shared memory would be used, so it's persistent data.

Aha! you might say, but if you use tmp files for complex types, won't you leak data and fill the disk with them? At worst, yes, but that's what FF and others are already doing. Besides that, as the copy & paste thing would be implemented as a library, after copying something new the old thing can be deleted.

And so on and on. As I said, it's trivial.

So much for edit buttons...

Posted Nov 6, 2010 21:36 UTC (Sat) by quotemstr (subscriber, #45331) [Link]

Your scheme falls apart when users want to paste complex data types between multiple instances of the same program: yet another use-case gummed up by your ill-conceived scheme. We already have the program-exiting feature and know how to deal with it. Read the documentation of the actual protocols (hint: clipboard daemon in the X11 case).

Furthermore, "just seeing a file path" is NOT an acceptable outcome from a user-interface point of view. Again, that's the kind of statement only a developer could make. You seem to imagine a world of "simple" programs that would just slurp up the clipboard contents and use them. What you really want is content negotiation. We already have that.

This line of argumentation is exactly what's wrong with the people who want to reinvent X. They look at the complexity of the existing system, imagine it can be reduced, but when they try, they either create something worse or cut features, most of which people actually use. It really smacks of hubris to suppose that we're smarter than people were 20 years ago, and that we can do a better job of solving the same problems.

X is not the problem. Copy and paste isn't the problem. This asinine buzz around replacing X pops up every five years or so (remember Berlin?), and it has the same outcome every time. It's like saying, "my web browser doesn't pass the ACID3 test --- so let's reinvent TCP!"

So much for edit buttons...

Posted Nov 6, 2010 22:11 UTC (Sat) by i3839 (guest, #31386) [Link]

How so? It can make its own private type and dump data in there without bothering anyone else. I'd say copy and pasting between multiple instances of the same program is easiest as far as complex data goes.

> Furthermore, "just seeing a file path" is NOT an acceptable outcome from
> a user-interface point of view. Again, that's the kind of statement only
> a developer could make. You seem to imagine a world of "simple" programs
> that would just slurp up the clipboard contents and use them.

Yes, I'm a computer programmer, and by your attitude I guess you're not.

May I assume any technical discussion goes over your head and you're too stupid to follow what I say? I hope not, because I don't assume that. You, on the other hand, seem to attribute my spartan approach to computing (Fluxbox, xterms, Nedit/Vim, and LaTeX for the rest) to the fact that I'm a programmer, and brush my taste and opinion away as "something only a developer would say".

Pasting data is by definition slurping clipboard content and using it.

Now back to the discussion at hand:

> Furthermore, "just seeing a file path" is NOT an acceptable outcome from
> a user-interface point of view.

The alternative is that nothing would happen because the program didn't understand that format. So yes, I think it's a pretty acceptable outcome for something that can't work. And with my developer's hat on, I'd say some outcome is better than nothing happening, because the latter is hard to debug and fix, while the former is plain and simple - for both users and developers. Users know what files are, developers know what's happening. But if you prefer, the program can also display nothing; that doesn't change the CP system, though IMHO it would be worse for the user experience.

As for your other ranting, I'm just saying that implementing a copy and paste system from scratch is technically trivial, no more and no less. It could be done in one day, maybe two. I just don't buy that copy & paste is hard stuff.

So much for edit buttons...

Posted Nov 6, 2010 22:22 UTC (Sat) by quotemstr (subscriber, #45331) [Link]

I am a programmer, but I seem to be one of the few who gives a damn about the experience of ordinary users. We're talking about a general-purpose mechanism, not a way for you to copy a path from one xterm to another. Myopic design like yours is the reason Linux has had quite limited success on the desktop.

> The alternative is that nothing would happen because the program didn't understand that format. So yes, I think it's a pretty acceptable outcome for something that can't work. And with my developer's hat on, I'd say some outcome is better than nothing happening
Finally, you make a good point. It's better for the user to receive some feedback than nothing at all, but you present a false dichotomy here. Nothing prevents an application from presenting a dialog box that says in clear, understandable $LC_MESSAGES "Sorry, but I don't know how to paste an image in this document".

Users have only the foggiest notion of what a file is. Hell, they conflate files with pictures on their desktop. They certainly don't know about paths: they know about sequences of clicks that bring up the right pictures. Asking them to deal with seeing "/tmp/asd5FAB34/image-jpeg.clipboard.tmp" when they meant to paste a hiking picture into an email is just an idea completely divorced from reality.

If you want to make a system purely for yourself and your ilk, fine. But don't go claiming that implementing a system is trivial, then force users to wear the same hairshirt that you do.

> As for your other ranting, I'm just saying that implementing a copy and paste system from scratch is technically trivial, no more and no less. It could be done in one day, maybe two. I just don't buy that copy & paste is hard stuff.
Ah, good old-fashioned arguments from assertion. Well, go ahead and "don't buy" that copy-and-paste is inherently complex. But reality is there regardless of whether you choose to stick your fingers in your ears, squeeze your eyes shut, and sing "it's easy! I know it's easy! Easy, easy, easy!" As I said, it's only simple if you have simple needs. You have no right to assert that users shouldn't need more features than you personally happen to use.

So much for edit buttons...

Posted Nov 7, 2010 10:41 UTC (Sun) by i3839 (guest, #31386) [Link]

You still haven't given one example that makes copy & paste hard; you just keep coming up with details that change the user experience, but not the actual copy&paste system much. I'm not sticking my fingers in my ears, I just haven't heard anyone give a good example of what makes copy & paste hard.

You keep missing the power of text: that I'd put in the filename doesn't mean you have to do that in your program too. Instead of /tmp/image.jpg you could put in "This program can't paste JPEG images!", or whatever the hell you like. That is IMHO a lot better than a stupid pop-up (I hate those). And if people see something they don't like, they can close their eyes, do ctrl+z and pretend nothing happened.

The fundamental problem with your "Sorry, but I don't know how to paste an image in this document" is that it's the program failing the paste that has to say it, while it doesn't know about the type you tried pasting. So now it has to know that the type it can't handle is an image. And then you make the mistake of making the copy & paste system more complicated by adding a text description of the type, and adding all that "error" handling everywhere. Copy & paste is not worth that extra complexity; that's how you get bloated software.

Because no one can tell me what makes copy & paste hard, I'll do it myself: to enter copy & paste hell, you have to want to copy from a complex type and paste it as another complex type. THAT is where the madness lies. And any sane copy & paste system will not try to solve that at all, because it can't; it's not the right place.

So I'm not saying that copy & paste is always easy; I'm just saying that implementing a copy & paste system is easy. 95% of the time hell is avoided, and the 5% who want to deal with it can do so themselves without making the rest of the system more complicated.

The way to solve the complex-to-complex problem is to solve it on a case-by-case basis, with programs agreeing how to do it per case. The agreeing pretty much means choosing an intermediate complex type, or agreeing on the ordering of the provided types. And to do all that, the list-of-types system is sufficient as far as I can tell. Make the interface extensible, just to be sure, and it's done.

The hard part of introducing a copy & paste system is getting people to agree on its features and interface, because there are always people who think it should be made more complicated than warranted.

Copy and Paste

Posted Nov 7, 2010 15:23 UTC (Sun) by ccurtis (guest, #49713) [Link]

I do not understand what you two are squabbling about.

If an application builds a MIME message for each Cut/Copy operation, and receives a MIME message for each Paste, why does this not work?

The creator of the message can (in the case of a PDF reader, for example) present both text and pbm versions; the receiver of the message can take the text/plain part (if an xterm) or the image/pbm part (if an image editor), or prompt the user for the format they want if it so desires.

How the message is stored is immaterial - disk file, network server, whatever - why would this "text only" interface not work?
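
As a sketch of this idea, using Python's standard email machinery purely for illustration (no existing clipboard works this way):

# Build a multipart MIME message per copy; the paster walks the parts
# and takes the first content type it understands.
from email.message import EmailMessage

def build_copy_message(text, image_pbm=None):
    msg = EmailMessage()
    msg.set_content(text)  # text/plain part, always present
    if image_pbm is not None:
        msg.add_attachment(image_pbm, maintype="image",
                           subtype="x-portable-bitmap")
    return msg

def paste_from(msg, accepted):
    # accepted: content types the pasting application understands,
    # in order of preference.
    for want in accepted:
        for part in msg.walk():
            if part.get_content_type() == want:
                return part.get_content()
    return None

# An xterm would call paste_from(msg, ["text/plain"]); an image editor
# might prefer ["image/x-portable-bitmap", "text/plain"].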

Copy and Paste

Posted Nov 7, 2010 15:30 UTC (Sun) by ccurtis (guest, #49713) [Link]

Bah. I didn't realize 90% of this article's comments are about C&P; I thought I had reached the end of the thread. Sorry for the noise.

Now, why are threaded discussions on LWN so hard to follow - it should be easy! ;-)

So much for edit buttons...

Posted Nov 7, 2010 13:03 UTC (Sun) by jond (subscriber, #37669) [Link]

You've said it's easy several times, but *this* impartial observer, at least, hasn't seen anything to support that, just a lot of hot air. At this point I have to say "show me the code" - especially if it really is only a day or two's work.

So much for edit buttons...

Posted Nov 8, 2010 15:10 UTC (Mon) by i3839 (guest, #31386) [Link]

I'm thinking about it.

I just checked out the Wayland repo and might give it a stab later this week.

Personally I find decentralized input event handling more important than CP though.

You must be joking

Posted Nov 6, 2010 21:25 UTC (Sat) by quotemstr (subscriber, #45331) [Link]

Only a developer could say the best copy-and-paste system would support only text. Your scheme falls apart because it relies on participating applications magically knowing the format of the data in the temporary file. Are they supposed to guess based on the file contents? You haven't eliminated the "every type there is" problem; instead, you've just pushed it down into a layer where it's even harder to get right. If you copy a spreadsheet, do you put CSV in the temporary file? What if you actually want to paste CSV? What if applications (especially across machines) don't guess file content types in the same way?

If you instead put the type information in the metadata, you basically have the system we have today *plus* the complexity of having to manage this temporary file and copy it across machines. Preemptively? On demand? Over what transport? Do transfers block the GUI?

Plus, you've lost the ability to send multiple alternative data types. If I copy some text from a word processor and paste it into another word processor, I want the formatting to be retained. If I paste that text into a terminal, I want plain text. There's no way to achieve that without some fallback provision in the copy and paste system. This is a feature users want. They use it all the time, today.

You haven't solved any problems. You've obfuscated them and made them *worse* while removing features at the same time.

You must be joking

Posted Nov 6, 2010 21:51 UTC (Sat) by i3839 (guest, #31386) [Link]

> Your scheme falls apart because it relies on participating applications
> magically knowing the format of the data in the temporary file. Are they
> supposed to guess based on the file contents? You haven't eliminated the
> "every type there is" problem; instead, you've just pushed it down into a
> layer where it's even harder to get right.

Umm, that's a known problem, and solved, as far as it's possible to solve, for regular files. I'm pushing it to the place where this problem is already solved, instead of reinventing the wheel awkwardly. The file extension seems good enough, but if you want MIME or whatever, that's easy enough to add.

> If you instead put the type information in the metadata, you basically
> have the system we have today *plus* the complexity of having to manage
> this temporary file and copy it across machines. Preemptively? On demand?
> Over what transport? Do transfers block the GUI?

No, today there is too much information passing between programs. I know nothing about it, but the fact that closing an app makes the copied thing disappear tells me that there is too much cooperation going on when copying/pasting happens.

The copying is only needed for network transparency, if you want to copy stuff between local and remote apps. It's not hard to implement.

> Plus, you've lost the ability to send multiple alternative data types. If
> I copy some text from a word processor and paste it into another word
> processor, I want the formatting to be retained. If I paste that text into
> a terminal, I want plain text. There's no way to achieve that without
> some fallback provision in the copy and paste system. This is a feature
> users want. They use it all the time, today.

This is actually a good point you make, finally.

Can't say I ever felt the need to do things like this, but here goes:

Add a way to copy something multiple times in different formats, and let the program pasting it choose the type it prefers.

So instead of a singular item, you suddenly have a list or array of items. I grant you that my earlier simple scheme wouldn't handle this too well - it's unnecessarily kludgey for text if you pass the file path as text already. So instead, always have a text representation (also the path, in case there's nothing else), and a (hopefully usually empty) list of files.

This pushes the complexity of handling multiple types to where it belongs, while keeping the copy and paste system itself simple.

> You haven't solved any problems. You've obfuscated them and made them
> *worse* while removing features at the same time.

What problems? I don't see any.

You must be joking

Posted Nov 6, 2010 21:54 UTC (Sat) by quotemstr (subscriber, #45331) [Link]

> Add a way to copy something multiple times in different formats, and let the program pasting it choose the type it prefers.

> So instead of a singular item, you suddenly have a list or array of items. I grant you that my earlier simple scheme wouldn't handle this too well - it's unnecessarily kludgey for text if you pass the file path as text already. So instead, always have a text representation (also the path, in case there's nothing else), and a (hopefully usually empty) list of files.

> This pushes the complexity of handling multiple types to where it belongs, while keeping the copy and paste system itself simple.

All right. Now we've arrived at a complex tagged-array scheme with server-managed copying. In what specific way is this better than (or, hell, different from) what we have today?

You must be joking

Posted Nov 6, 2010 22:18 UTC (Sat) by i3839 (guest, #31386) [Link]

It's not server-managed; it's implemented as a library function, for one. There's no complex copying around, only simple copying when needed.

Second, it's primarily a simple text-based CP system, but for the people that want silly complex types, it supports an array of path/type pairs. That is where the API ends and the system is complete. Technically it's all very simple and straightforward.

As for whether it's better or not than the current system, I don't know. Read my first post, all I'm saying is that you don't need X for copy&paste and that it's trivial enough to implement.
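
For concreteness, the whole API being described might amount to something like this sketch (invented names; a plain dict stands in for the shared-memory store):

# Primarily text, plus an optional list of (path, MIME type) pairs for
# complex types - and that is the entire interface.
_store = {"text": "", "files": []}

def clip_copy(text, files=()):
    # files: sequence of (path, mime_type) pairs, preferred first
    _store["text"] = text
    _store["files"] = list(files)

def clip_paste_text():
    return _store["text"]

def clip_paste_complex(accepted_mimes):
    for path, mime in _store["files"]:
        if mime in accepted_mimes:
            return path, mime
    return None  # caller falls back to clip_paste_text()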

You must be joking

Posted Nov 8, 2010 12:47 UTC (Mon) by dgm (subscriber, #49227) [Link]

> Add a way to copy something multiple times in different formats, and let the program pasting it choose the type it prefers.

What a waste.

You need to realize that cut and paste is really a form of IPC, and one that requires data format negotiation.

Also, you cannot rely on temporary files for that on X11: the two applications need not be on the same machine, or even on the same network. Maybe they cannot see each other directly, either.

You must be joking

Posted Nov 8, 2010 15:26 UTC (Mon) by mjthayer (guest, #39183) [Link]

> Also, you cannot rely on temporary files for that on X11, both applications need not be on the same machine, nor on the same network. Maybe they cannot see eachother directly, either.

You can also implement the clipboard as something purely local but proxyable, rather than coupling it so tightly with the window system. That would have the advantage that it can be more easily proxied in other situations than just the one you originally thought of (e.g. between virtual machines or different Synergy desktops). Just for fun, here is an imaginary non-X11-based clipboard API based on a well-known location in the filesystem (e.g. /var/spool/clipboard.tar).

* The application placing data in the clipboard creates a tar archive. The first file in the archive is a text file with a list of the MIME types provided, one per line. The other files contain the clipboard data in different formats, probably one file per MIME type and as few as possible (say one specialised format, one common one and a text fallback). The application atomically renames the tar file to /var/spool/clipboard.tar.

* An application which can handle clipboard data waits for file change notifications on /var/spool/clipboard.tar. When it gets one it reads the list of MIME types to see if it can handle one and if so enables its "paste" menu entry or whatever it does. When the user pastes data it reads the whole archive until it finds the format it can handle.

A couple of notes about the imaginary protocol:

* It is probably highly inefficient, although again copy and paste is not something you do several times a second.

* It could be integrated with the X11 clipboard protocol by a suitable X11 proxy client with the big caveat that the client would need to know about (though not to understand) lots of different data formats so that when an X11 client offers clipboard data the proxy client would know which formats are worth grabbing and which are not.
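
A minimal sketch of the imaginary protocol as described - a writer doing the atomic rename, and a reader scanning the offered MIME types (everything here follows the proposal's invented conventions; nothing like it exists):

# First tar member lists the MIME types offered; the remaining members
# hold the data, and the rename makes the update atomic for readers.
import io
import os
import tarfile
import tempfile

CLIPBOARD = "/var/spool/clipboard.tar"

def copy_to_clipboard(formats):
    # formats: dict of MIME type -> bytes, best representation first
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(CLIPBOARD))
    with os.fdopen(fd, "wb") as f:
        with tarfile.open(fileobj=f, mode="w") as tar:
            def add(name, data):
                info = tarfile.TarInfo(name)
                info.size = len(data)
                tar.addfile(info, io.BytesIO(data))
            add("MIMETYPES", "\n".join(formats).encode())
            for i, data in enumerate(formats.values()):
                add(str(i), data)
    os.rename(tmp, CLIPBOARD)  # readers see the old or new tar, never half

def paste_from_clipboard(accepted):
    with tarfile.open(CLIPBOARD) as tar:
        offered = tar.extractfile("MIMETYPES").read().decode().splitlines()
        for i, mime in enumerate(offered):
            if mime in accepted:
                return mime, tar.extractfile(str(i)).read()
    return None

The file-change-notification side and per-session paths are left out, as are the permission questions a shared spool directory would raise.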

Simple is good enough

Posted Nov 14, 2010 17:06 UTC (Sun) by i3839 (guest, #31386) [Link]

Exactly, this is more or less the approach I was thinking about (at a high level, not the details; I don't see the need for tar, for instance). Have a simple copy and paste system which can be easily used by all applications, including network or X proxies and non-graphical programs.

Simplicity is more important than efficiency. The alternative would be to have format negotiation and direct copying between programs when needed (for non-text formats), like X has. That is not worth the complexity.

All that's needed is a way to store multiple formats and retrieve them, as well as serialisation and notifications when things get copied (for proxies).

Simple is good enough

Posted Nov 14, 2010 21:00 UTC (Sun) by quotemstr (subscriber, #45331) [Link]

"Why does copying this image freeze the program for 30 seconds? Lol that doesnt happen under Windows."

That's what users will begin saying if your scheme is implemented and we don't have application-to-application copying. What X11 Jacobins like you don't realize is that the current design choices were made *for a reason* and shouldn't be lightly abandoned.

Simple is good enough

Posted Nov 15, 2010 10:10 UTC (Mon) by mjthayer (guest, #39183) [Link]

> "Why does copying this image freeze the program for 30 seconds? Lol that doesnt happen under Windows."

If that was also addressed to me, I will make an attempt at defending my proposal (consider the correction I added above as part of it).

* Clipboard data is written to a file on disk in order to share it, but due to disk caching that doesn't have to mean that the disk has to be a bottleneck (even disregarding the fact that /var/clipboard could be a tmpfs). This would need real life testing of course.
* Some latency is acceptable (ESR's estimate is 0.7 seconds - http://www.faqs.org/docs/artu/ch10s01.html). In this case we have additional room for manoeuvre, as we also have the time the user needs to switch from the copying to the pasting application.
* I will also point out that this proposal actually removes a potential source of latency (one that does occur in the wild) with the X11 selection protocol - when an application pastes X11 clipboard data it requires several rounds of communication between the two applications via the X server. If the application offering the data is currently busy the application pasting will often freeze until the data can be served. With the scheme I proposed the data will be available at once.

I realise that it might still not be workable despite all that, but I do think that there is a chance it might be.

Simple is good enough

Posted Nov 15, 2010 10:30 UTC (Mon) by mjthayer (guest, #39183) [Link]

>> "Why does copying this image freeze the program for 30 seconds? Lol that doesnt happen under Windows."

> If that was also addressed to me, I will make an attempt at defending my proposal (consider the correction I added above as part of it).

Replying to myself. One clear weakness of my proposal is that it might not work well with select and middle-button paste, as selecting is something you do more often than copying (again, that would need testing to be sure). I would give it a better chance of working with drag and drop (which I personally prefer over middle-button paste, but I greatly fear I am in a minority here with that).

Simple is good enough

Posted Nov 16, 2010 13:00 UTC (Tue) by i3839 (guest, #31386) [Link]

Not really. People probably don't want to copy and paste complex things like images with select and middle mouse button. So if applications are smart they only copy simple things that are quick to copy when selecting, and only do the slow copy when users explicitly copy something.

Simple is good enough

Posted Nov 16, 2010 14:11 UTC (Tue) by mjthayer (guest, #39183) [Link]

> So if applications are smart they only copy simple things that are quick to copy when selecting, and only do the slow copy when users explicitly copy something.

Shouldn't applications be doing what the user asks them to rather than being smart? Selecting and middle-click pasting an image works now. If the user selects it, should the application really assume that they don't want to paste it? Of course, it might still turn out that users don't select things often enough for the overhead to be a big issue.

Simple is good enough

Posted Nov 16, 2010 22:26 UTC (Tue) by i3839 (guest, #31386) [Link]

It depends on the program and context. If you select an image in a browser I wouldn't copy the image. But if it's an image editing program, I would. But copying it into a hundred formats isn't something you should do for every selection of any random thing (or ever, but if you do...).

Simple is good enough

Posted Nov 16, 2010 22:42 UTC (Tue) by mjthayer (guest, #39183) [Link]

> It depends on the program and context. If you select an image in a browser I wouldn't copy the image. But if it's an image editing program, I would.

This does happen currently though (I tested it earlier today).

> But copying it into a hundred formats isn't something you should do for every selection of any random thing (or ever, but if you do...).

I changed that aspect of the proposal in a previous comment - in the new version a file containing a single mime type (with a well-known magic number) is saved to disk. To handle conversions, a (large) set of filters is installed on the system, and the application reading the clipboard must iterate through the installed filters to find ones which convert the file to a format it can use. This is roughly what BeOS did, which apparently worked well (or so I am told by a colleague who developed for BeOS).
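
A rough sketch of that filter-registry idea (the registry layout and the crude HTML filter are invented; BeOS's actual translator API differed in its details):

# Single native format on the clipboard; pasting applications walk a
# registry of converters until one yields a type they accept.
import re

def html_to_text(data):
    # Crude tag-stripper, just to keep the sketch self-contained.
    return re.sub(rb"<[^>]+>", b"", data)

FILTERS = {
    ("text/html", "text/plain"): html_to_text,
    # ...installed system-wide, one entry per known conversion
}

def paste_as(native_mime, data, accepted):
    if native_mime in accepted:
        return native_mime, data
    for (src, dst), convert in FILTERS.items():
        if src == native_mime and dst in accepted:
            return dst, convert(data)
    return None  # no conversion available; leave "paste" disabled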

Simple is good enough

Posted Nov 19, 2010 22:13 UTC (Fri) by i3839 (guest, #31386) [Link]

> I changed that aspect of the proposal in a previous comment - in the new
> version a file containing a single mime type (with a well-known magic
> number) is saved to disk. To handle conversions, a (large) set of filters
> is installed on the system, and the application reading the clipboard
> must iterate through the installed filters to find ones which convert the
> file to a format it can use. This is roughly what BeOS did, which
> apparently worked well (or so I am told by a colleague who developed for
> BeOS).

Well, the problem is that for complex types you can't easily convert from one to the other, because only the program doing the copying has all the info.

Example: If you copy a bit of a webpage, it can be either plain text, the raw HTML code, or formatted text depending on the style etc. Converting to plain text is almost always possible, but anything else doesn't really work. So a single mime type isn't always sufficient.

There are two sides to a copy and paste system: The program ABI to do the copying and pasting, and the system ABI how it's actually done. I think the latter shouldn't be set in stone, only the former, to keep the implementation flexible. So all copying and pasting should happen through the system installed copy&paste library, or the copy and paste programs (simple front-ends for the lib).

(I still haven't found time to start implementing this; hopefully next week.)

Simple is good enough

Posted Nov 22, 2010 14:52 UTC (Mon) by mjthayer (guest, #39183) [Link]

> Well, the problem is that for complex types you can't easily convert from one to the other, because only the program doing the copying has all the info.

Actually the idea was that the application doing the copying provides the data in its native/internal format, which by definition should have all the information. It could always define an x- or a vnd. MIME format for this and provide whatever converters it wanted to transform that data into other formats (they could probably double up as export filters too).

> Example: If you copy a bit of a webpage, it can be either plain text, the raw HTML code, or formatted text depending on the style etc. Converting to plain text is almost always possible, but anything else doesn't really work. So a single mime type isn't always sufficient.

In this case the native format is presumably "text/html", which should be convertible to either plain text or formatted text without the copying application even having to provide its own converters.

Simple is good enough

Posted Nov 25, 2010 21:44 UTC (Thu) by i3839 (guest, #31386) [Link]

The problem is that in the case of HTML, you generally lose the formatting information, because that's not in the part you copied but higher up, or in a CSS file. So there is no native format: you don't want to copy raw HTML code into a word processor, nor the plain text, but something that more or less looks like what you copied. Not to mention that usually the selected part is "broken" HTML, because not all tags are closed. So it's not that simple, and I don't think it's safe to get rid of the list support.

Images and other data formats with an obvious raw format are much easier and better suited for automatic conversion. That can be done automatically without changing the API.

Simple is good enough

Posted Nov 25, 2010 21:56 UTC (Thu) by mjthayer (guest, #39183) [Link]

> The problem is that in the case of HTML, you generally lose the formatting information, because that's not in the part you copied but higher up, or in a CSS file. So there is no native format: you don't want to copy raw HTML code into a word processor, nor the plain text, but something that more or less looks like what you copied.

Just for interest I copied some text in Firefox and ran my clipboard format viewer. Here are the results:

$ ../tmp/viewclipformats
Found clipboard format: TIMESTAMP
Found clipboard format: TARGETS
Found clipboard format: MULTIPLE
Found clipboard format: text/html
Found clipboard format: text/_moz_htmlcontext
Found clipboard format: text/_moz_htmlinfo
Found clipboard format: UTF8_STRING
Found clipboard format: COMPOUND_TEXT
Found clipboard format: TEXT
Found clipboard format: STRING
Found clipboard format: text/x-moz-url-priv

Without knowing for sure, it wouldn't surprise me if one of those contained both the HTML and the formatting information, which I think should be feasible with my proposal too.

Simple is good enough

Posted Nov 26, 2010 22:26 UTC (Fri) by mjthayer (guest, #39183) [Link]

> The problem is that in the case of HTML, you generally lose the formatting information, because that's not in the part you copied but higher up, or in a CSS file. So there is no native format: you don't want to copy raw HTML code into a word processor, nor the plain text, but something that more or less looks like what you copied. Not to mention that usually the selected part is "broken" HTML, because not all tags are closed.

You also have to ask, when an application puts HTML data into the clipboard, what data it is actually putting there. When I select a section of text, pictures and whatever in Firefox and copy, I get HTML data in the clipboard. But Firefox can't just put the source of the document from the point where the selection begins to the point where it ends into the clipboard: it is announcing HTML data, and as you point out, that wouldn't be HTML, it would be broken HTML. So Firefox has no choice but to massage the HTML data anyway, and if it is doing that already, adding the style information inline is no great hardship.

Of course, if you want to reuse that data as-is as HTML for some other web page then you are probably out of luck, but if you think about it that makes no sense anyway - if you want to do that you should probably be copying the source of the HTML as plain text. If you select and copy part of a page in Firefox, chances are that what you are actually about to do is paste it either as plain text (the text visible on the page, not the HTML source) or as formatted text into e.g. OpenOffice.

And if you were copying the data inside some visual HTML editor, it would probably still not make sense for the editor to insert the data as naive HTML - chances are there would be no way to paste the data in any form resembling the source of the page the editor was generating, and in any case, if you were trying to get at the generated source it would make more sense to ask the editor directly than to copy and paste to get at it. In fact I would expect the visual editor to use some internal format which was not valid HTML at all when copying to the clipboard, but which another instance of the editor would know what to do with when pasting. It might provide a filter to convert it to HTML, but not for the purposes of viewing the source - you don't use the clipboard for that - but rather as a stepping stone for converting it to OOXML or something else.

Hope that made sense, as I am rather short of sleep currently. I would really like to be clear that I am not trying to argue for the sake of arguing here, but rather because responding to the points you make forces me to think things through myself.

Simple is good enough

Posted Nov 27, 2010 10:39 UTC (Sat) by i3839 (guest, #31386) [Link]

> I would really like to be clear that I am not trying to argue for the
> sake of arguing here, but rather because responding to the points you
> make forces me to think things through myself.

Same here; we're trying to figure out if a list of formats is really needed, or if always providing only one and having converters is sufficient. This choice determines the API, so it's pretty important.

Only having one format and providing converters is simpler, but less complete. My main concern is that it's not always sufficient, or that it makes implementing copy harder than necessary for some applications, because they have to create one "complete" format plus converters.

Another concern is that, when also supporting a complex type, you convert from simple->complex->simple, hoping that the "simple" you end up with is the same as what you started with. So the unrelated complex type makes simple types more complex too, with too much room for errors in my opinion. In other words, copying simple types is not simple anymore if you also copy a complex one.

Lastly, I don't really see a way to support multiple types when pasting. It should be the pasting program's decision what type to paste, if it supports multiple types. I don't see any way around supporting a list of types in the pasting API, and then you might as well support lists in the copy API too.

I think you make too many assumptions about what the user or pasting program expects in your line of thinking.

All in all I think the automatic conversion idea is good, but not always sufficient. Combined with today's multiple-format support in applications, I think it's best to support multiple formats, but to encourage converter usage when possible.

Then when someone pastes something the lists are compared, and if they have no common format, a converter is used.

A list of formats is basically "more of the same", so I think the added complexity, both for the API and implementation, is small enough.

Now we just have to find some time to implement this. I think I'll give it a stab next week. I'll keep you informed (my email address is indan@nul.nu).

Simple is good enough

Posted Dec 9, 2010 19:16 UTC (Thu) by Lestibournes (guest, #71790) [Link]

Maybe something like this will work:
1. Program A indicates that it is ready to supply data by writing its identifier to clipboard/source.
2. Program B requests the data by writing its identifier to clipboard/destination.
3. Program A writes the data files in clipboard/data.
4. Program A indicates that it finished writing the data by erasing the content of clipboard/source and clipboard/destination.
5. Program B reads the data files from clipboard/data.

If no one requests the data from Program A, then it will still dump the data when it terminates. The only weaknesses I can see are a delay when the Paste operation is performed, and that the data will be lost if Program A crashes. There should be a separate clipboard folder for each session, to avoid conflicts such as two users who share an account overwriting each other's Copy operations.
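
For illustration, the five steps might look like this polling toy (the clipboard/ layout is the comment's invention; a real version would use file-change notifications rather than sleeping, and deleting the marker files stands in for step 4's "erase the content"):

# Program A runs offer() then serve(); Program B runs request() then
# paste(). Files under CLIP carry the whole handshake.
import os
import time

CLIP = "clipboard"  # one such directory per session

def _write(name, data):
    with open(os.path.join(CLIP, name), "wb") as f:
        f.write(data)

def offer(source_id):                       # step 1
    _write("source", source_id.encode())

def request(dest_id):                       # step 2
    _write("destination", dest_id.encode())

def serve(data):                            # steps 3 and 4
    while not os.path.exists(os.path.join(CLIP, "destination")):
        time.sleep(0.05)
    _write("data", data)
    os.remove(os.path.join(CLIP, "source"))
    os.remove(os.path.join(CLIP, "destination"))

def paste():                                # step 5
    while os.path.exists(os.path.join(CLIP, "source")):
        time.sleep(0.05)                    # wait for A to finish writing
    with open(os.path.join(CLIP, "data"), "rb") as f:
        return f.read()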

Simple is good enough

Posted Nov 15, 2010 9:51 UTC (Mon) by mjthayer (guest, #39183) [Link]

> Exactly, this is more or less the approach I was thinking about (at a high level, not the details; I don't see the need for tar, for instance).

I have been turning this over in my head since I posted that, and have refined it somewhat. Rather than creating a tar file, it should be enough to create a standard file in a known MIME format with a usable magic number (note that the freedesktop.org MIME specification allows for adding custom MIME formats, complete with magic numbers, to a system). This would of course mean that text data would also need to be preceded by a MIME tag, which I recognise makes the system slightly uglier, but also more reliable. Conversions could be handled the BeOS way, which is to register conversion filters with the system (shared objects or just executable filters - in the end they need to be loaded from disk one way or another, so it might not make much difference). An application pasting clipboard data of a MIME type it couldn't handle could enumerate the available filters to see if one of them helped.

An X11 application using this clipboard should of course start a proxy application - preferably a well-known singleton - which would keep the clipboard in sync with the traditional X11 one. This would allow for a painless transition (I say transition, but there is no reason this sort of compatibility shouldn't be maintained forever). Perhaps (although it's unlikely) I will try to get this working some time. I already maintain an X11 clipboard proxy tool that could be adapted to do the job.

You must be joking

Posted Nov 7, 2010 5:14 UTC (Sun) by drag (guest, #31333) [Link]

> If you instead put the type information in the metadata, you basically have the system we have today *plus* the complexity of having to manage this temporary file and copy it across machines. Preemptively? On demand? Over what transport? Do transfers block the GUI?

If copying non-text data you just have to use a URL to reference the resource:
http://blah/
file://blah/
sftp://blah/

Etc.

Then you let the application figure it out. If it does not know how to handle the file then it can display an error, or just paste the text link or something.

This is where things like GVFS come in handy. For apps that support GVFS it's easy to do this... you just let GVFS handle the details of the connection to the service and then have the program figure out what to do with the file.

For non-GVFS programs you can theoretically just pass the URL through /home/blah/.gvfs/etc/etc/ and expose GVFS through FUSE.

You must be joking

Posted Nov 7, 2010 13:07 UTC (Sun) by jond (subscriber, #37669) [Link]

Last I checked, you can copy and paste multiple files at once.

You must be joking

Posted Nov 7, 2010 3:51 UTC (Sun) by zander76 (guest, #6889) [Link]

Hmm, it does *seem* like an easy problem to solve. Here's my first thought, and I am sure I am missing something :)

If an application knows how to copy and paste to itself, then it already knows a fair amount about what it is copying and pasting. If I were going to write a copy/paste function in an image editor, let's say, then it would obviously know about the bits, but perhaps I could attach some header information like the image type.

Once I was that far, it would seem to me that I could just copy that information into a global queue. On paste I could then pass the structure to the application. It would be up to the application to deal with images, text or whatever else.

It doesn't *seem* that difficult. You got me, what am I missing? I am kind of curious now.

Thanks

You must be joking

Posted Nov 7, 2010 4:06 UTC (Sun) by dlang (guest, #313) [Link]

This sounds like it would be a perfect article for LWN: how a problem can seem so simple, and when you dig down it turns out to be so hard.

The person up-thread who took his class through the process of trying to specify (not even implement) the correct behavior in all conditions probably has a lot of information that would be very interesting and educational if he could take the time to write it up (especially if the writeup can talk about a lot of the dead ends that seem so attractive).

You must be joking

Posted Nov 7, 2010 4:30 UTC (Sun) by zander76 (guest, #6889) [Link]

Someone once asked me, "What is a camel?" The answer was, "An over-designed mouse created by a committee!" It takes a lot of *what ifs* to get from a mouse to a camel.

Students tend to under- or over-design things. To loosely quote Linus: "It's simple to make things complex and complex to make things simple." It does take a fair amount of experience to hit that middle ground.

It is very easy to make a complicated mess, especially when you are trying to make everybody in the world happy and address every problem before you start. You will never get past the design phase.

Now don't get me wrong, I am not stating that this is the case. I am simply stating that I can see how this could get way out of control. This is especially true with students trying to account for every problem. In the working world you tend to spend less time on use cases and more time on getting the job done so you can go home :)

You must be joking

Posted Nov 7, 2010 4:55 UTC (Sun) by PO8 (guest, #41661) [Link]

I've put some initial notes on my blog: http://fob.po8.org/node/512. I'm not sure how many of the "dead ends" I've covered, but I've at least tried to describe some reasons why the problem is so hard.

Hope this helps.

You must be joking

Posted Nov 7, 2010 7:53 UTC (Sun) by dlang (guest, #313) [Link]

from my naive point of view, the obvious answer to some of the problems on X would be to merge the two existing mechanisms, not eliminating either of them, but just having them both use the same clipboard storage so that no matter how you copy something, either approach to pasting will produce the same result.

I've only been using linux as my primary desktop for 13 years or so, so I'm a bit of a novice in understanding all the nuances of why the two are separate in the first place or how to know when each is being used :-)
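
For what it's worth, tools like autocutsel already approximate this merge. A rough sketch of the idea (mine, simplified; the actual data transfer and proper timestamps are omitted) using the XFixes extension:

    /* Watch for new PRIMARY owners and mirror them into CLIPBOARD. */
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <X11/extensions/Xfixes.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 1, 1, 0, 0, 0);
        Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);
        int ev_base, err_base;

        XFixesQueryExtension(dpy, &ev_base, &err_base);
        XFixesSelectSelectionInput(dpy, win, XA_PRIMARY,
                                   XFixesSetSelectionOwnerNotifyMask);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == ev_base + XFixesSelectionNotify) {
                /* fetch PRIMARY (as in any paste), then claim CLIPBOARD
                 * and serve the same bytes to whoever asks for them */
                XSetSelectionOwner(dpy, clipboard, win, CurrentTime);
            }
        }
    }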

You must be joking

Posted Nov 8, 2010 6:08 UTC (Mon) by mfedyk (guest, #55303) [Link]

if you merged the two clipboards, you would have the problem of overwriting the current clipboard contents by accidentally selecting something with the mouse.

I like being able to copy and paste with just the mouse and not having to go through any menus to do so. I end up cursing the lack of that feature whenever I'm on another platform.

You must be joking

Posted Nov 8, 2010 7:16 UTC (Mon) by dlang (guest, #313) [Link]

the question is would that be worse than the confusion of having two clipboards and the inconsistent way that applications use one or the other?

You must be joking

Posted Nov 8, 2010 15:43 UTC (Mon) by foom (subscriber, #14868) [Link]

If you're using the menus to copy&paste on other platforms, you're doing it wrong. Use the keyboard shortcuts...

You must be joking

Posted Nov 8, 2010 21:39 UTC (Mon) by mfedyk (guest, #55303) [Link]

> If you're using the menus to copy&paste on other platforms, you're doing it wrong. Use the keyboard shortcuts...

Note the "with just the mouse" part...

I use the CLI all the time, but sometimes I don't have a free hand to reach for the keyboard (using screwdriver, using someone else's computer to instruct them, etc.) and just want to do something quickly with the mouse only.

Mouse selection and middle click paste

Posted Nov 11, 2010 16:10 UTC (Thu) by cdmiller (guest, #2813) [Link]

I too use the mouse selection/paste extensively day to day. In fact it is seen as a "killer feature" by many folks when I'm convincing them to try a Linux desktop. Occasionally an application intentionally breaks this functionality. I remove those from my desktop and our computer labs whenever possible...

You must be joking

Posted Nov 8, 2010 13:32 UTC (Mon) by tialaramex (subscriber, #21167) [Link]

There aren't two clipboards.

X has these things called selections. How they are used is defined outside X; you could run an X server and a suite of apps that had no selections, or had a dozen all named after capital cities. X does not care.

For interoperability you need to agree how to use them. The ICCCM provided a good enough description for its day, but apparently in the late 1990s reading comprehension among software developers declined, and Qt managed to screw it up repeatedly, so there is also a FD.O document which spells it out.

So, these documents name selections including PRIMARY and CLIPBOARD. The PRIMARY selection is to be set to whatever the user last explicitly selected. By convention apps ask for the contents of this selection and insert it when you press the middle button.

The CLIPBOARD selection is maintained separately by explicit cut or copy operations. Most apps ask for the contents of this selection when you use their paste operation.

Selections are also used in drag-and-drop functionality. They have several features that the average My First Clipboard design doesn't handle (a minimal paste-request sketch follows the list):

• Low overhead. Rather than storing whatever you select or cut into some OS-provided "clipboard" where it will mysteriously waste a lot of RAM*, the selections exist only in the source application until needed. X just tracks a window ID and a timestamp.
• Content-negotiated. Rather than forcing everything into a lowest common denominator like plain text, the source app can offer various formats and the recipient chooses.
• Network-transparent. So long as you actually do it with X (rather than sending a filename as per some suggestions in this thread) you get network transparency. Copy from the remote xterm, paste into the local web browser.
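
To make the request side concrete, here is a minimal Xlib paste sketch (mine, not the commenter's): ask whoever owns PRIMARY to convert it to UTF-8 text and deposit the result on our window.

    /* Minimal PRIMARY paste. Large transfers use the INCR protocol,
     * which is omitted here. */
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 1, 1, 0, 0, 0);
        Atom utf8 = XInternAtom(dpy, "UTF8_STRING", False);
        Atom prop = XInternAtom(dpy, "MY_PASTE", False);
        XEvent ev;

        /* CurrentTime is a shortcut; the ICCCM wants a real event timestamp */
        XConvertSelection(dpy, XA_PRIMARY, utf8, prop, win, CurrentTime);
        do { XNextEvent(dpy, &ev); } while (ev.type != SelectionNotify);

        if (ev.xselection.property != None) {
            Atom type; int fmt; unsigned long n, left; unsigned char *data;
            XGetWindowProperty(dpy, win, prop, 0, 1 << 20, True,
                               AnyPropertyType, &type, &fmt, &n, &left, &data);
            printf("%.*s\n", (int)n, data);
            XFree(data);
        }
        XCloseDisplay(dpy);
        return 0;
    }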

Really the worst problem is that application developers don't care. They refuse to "pay their taxes" as it has been called, by implementing features that require some work on their part to deliver a better experience for the end user across all applications. This isn't just about the clipboard, it's a widespread problem. They may hard-code a date format that annoys non-Americans, or misbehave when multiple monitors are used, or any number of things. And it's not just on X, this is a problem on every platform, only the specifics vary.

I really mean it about them not caring. For a while I filed bugs against apps that got this stuff wrong. But the response was almost always hostile.

* A lot of designs don't consider this. Users expect that somehow the computer "knows" that they intended to just throw away the 50MB of charts they just cut from a document, but that they needed to keep the 15MB image they cut the next day. There are no ultimate solutions here; like window focus, it's a matter of best effort.

You must be joking

Posted Nov 18, 2010 9:45 UTC (Thu) by renox (guest, #23785) [Link]

>> Really the worst problem is that application developers don't care. They refuse to "pay their taxes" as it has been called, by implementing features that require some work on their part to deliver a better experience for the end user across all applications <<

Which is why I always look with a lot of skepticism at proposals that want to replace things done by a common server with things done by all the applications.

One example is XCB which, AFAIK, is still not used by Qt or GTK even though it was supposed to allow better threading.

You must be joking

Posted Nov 7, 2010 11:20 UTC (Sun) by quotemstr (subscriber, #45331) [Link]

Thanks for taking the time to write that up. It's a great summary of the issues that make the problem an involved one.

You must be joking

Posted Nov 7, 2010 13:58 UTC (Sun) by foom (subscriber, #14868) [Link]

You wrote:

> The best means we have for identifying media type right now is MIME-types. Unfortunately, they are really too incomplete and disorganized for CCP purposes. Their ontology is only two levels deep and highly incomplete.

You might be interested to check out Apple's solution to that problem, Uniform Type Identifiers (first introduced in OSX 10.4, it's been slowly introduced into more and more data transfer APIs). They are multi-level hierarchical, where the hierarchy is defined outside of the type-name.

http://developer.apple.com/library/mac/documentation/File...

Also see the "Pasteboard Concepts" article.

http://developer.apple.com/library/mac/documentation/Coco...

Apple Pasteboard and UTIs

Posted Nov 8, 2010 7:52 UTC (Mon) by PO8 (guest, #41661) [Link]

Interesting! Thanks much for the pointers.

You must be joking

Posted Nov 7, 2010 7:02 UTC (Sun) by mfedyk (guest, #55303) [Link]

+1

yes, very much looking forward to this article.

You must be joking

Posted Nov 7, 2010 4:08 UTC (Sun) by dlang (guest, #313) [Link]

a simple example of how you can run into problems:

think about doing a text cut-n-paste between two terminal windows of different widths, where the source is multiple lines, some of which wrap.

many Windows apps do this wrong; most *nix apps handle this correctly.

You must be joking

Posted Nov 7, 2010 4:13 UTC (Sun) by zander76 (guest, #6889) [Link]

Yeah, Windows is really annoying with how it does it. It simply encodes a return character at the end of every displayed line in the terminal, rather than only at the actual return characters in the terminal text.

You must be joking

Posted Nov 7, 2010 4:35 UTC (Sun) by zander76 (guest, #6889) [Link]

That leads me back to my original question, which is "Does the application know how to copy and paste to itself?". In the Windows case it doesn't encode the text correctly right from the very beginning, and that has nothing to do with the system-wide copy/paste function.

You must be joking

Posted Nov 7, 2010 4:58 UTC (Sun) by drag (guest, #31333) [Link]

I am sure that everybody here is aware of this, but I just like to point it out from time to time to remind everybody. It's like trying to explain how Windows 98 sucked to people that used Windows 98 for 7 years or more... they often don't realize how crappy something is because they have lived with it and avoiding its problems has become second nature.

In Linux on Gnome:

1. Open up Gnome-terminal. Highlight some text and right click, select copy. Highlight some other text.
2. Close out Gnome-terminal
3. Open up Gedit.
4. Middle-click to paste, then right-click paste.

Notice how you have 2 copy-n-paste buffers. Highlighting text makes the first buffer go away and be replaced by the new text. This makes it almost entirely worthless for anything except working with a terminal. Using middle-click copy is one of those habits I wish I could break myself of.

The second buffer works in a sane way.

Almost.

5. Open up Firefox. Highlight some text, right-click copy.
6. Close out firefox.
7. Attempt to paste text into Gedit.

Notice something wrong?

The only apps that get it right are Gnome apps that religiously follow the HIG. Probably KDE-only apps get it right, too. Everybody else gets it wrong almost every single time. Some apps will clear everything out every time. Some apps will override one buffer with the other in order to be helpful. All sorts of really weird and crappy behavior.

It's something that has been fundamentally broken in X since the beginning of time. I don't think there is any sane way to fix it, as nobody has been able to do so. Even with aggressive clipboard managers it's still a bit hit or miss whether it works.

You must be joking

Posted Nov 7, 2010 5:05 UTC (Sun) by dlang (guest, #313) [Link]

the ability to highlight text and then middle click to paste is one of the things that I love about linux, not just when working with terminals (I paste things from webforms, pdfs, etc into webforms and applications all the time).

I do get annoyed once in a while by apps that use the second clipboard, and I've never taken the time to figure out the difference between the two, but for the most part I find that if just highlighting doesn't work, shift + highlighting almost always does.

You must be joking

Posted Nov 7, 2010 13:51 UTC (Sun) by drag (guest, #31333) [Link]

Yeah. I used to feel that way also. But since then I've changed my mind. Middle-click is impossible to use if you want to perform a 'replace' on some text.

A simple example: try pasting a URL into your browser bar, at pretty much any point, without opening an extra tab. Also, many many times I'll lose my buffer by simply clicking on a terminal window and accidentally highlighting some whitespace or some tiny portion of text.

The advantage of the second buffer is mainly that you control when things are inserted. With the primary, traditional X copy buffer, it is often wiped out many times during the course of normal text manipulation in a GUI.

I wouldn't mind having 2 buffers at all, except that how applications and X handle these buffers is broken. It's very inconsistent, so you either have to learn how each and every application you typically use is going to behave, or you just end up having to copy stuff multiple times quite often.

You must be joking

Posted Nov 8, 2010 2:46 UTC (Mon) by madscientist (subscriber, #16861) [Link]

Yes, but the answer to the browser bar problem is NOT to destroy the incredibly useful cut/paste behavior of traditional X. Rather, it's simply to have a button on the browser bar that will clear the @#$%& text when you press it, without requiring you to select the text first. How hard is that? I can't believe it's 2010 and we still don't have that as a default part of the browser.

You must be joking

Posted Nov 8, 2010 3:39 UTC (Mon) by dlang (guest, #313) [Link]

my work-around is to just open a new tab to paste the URL into (and a history of doing this is why I have a couple hundred tabs open :-)

You must be joking

Posted Nov 8, 2010 7:07 UTC (Mon) by mp (subscriber, #5615) [Link]

In Firefox middle-clicking anywhere in the window opens the URL from the selection, no need to open new tabs and aim at the tiny address bar.

You must be joking

Posted Nov 8, 2010 7:14 UTC (Mon) by dlang (guest, #313) [Link]

that's a configurable option. I don't remember when I first ran into it, but I hunted down how to disable it and have happily used the middle mouse button to open a link in a new tab instead (I got _so_ annoyed at slightly missing a link or text bar and the entire page disappearing on me when Firefox opened some random URL that it thought I wanted to go to, or did a search for the text that happened to be in the clipboard)

You must be joking

Posted Nov 8, 2010 3:54 UTC (Mon) by sfeam (subscriber, #2841) [Link]

> a button on the browser bar that will clear the @#$% text when you press it

Konqueror has this, and I love it. That, the built-in site filtering, and the file browser are enough to keep me using Konqueror rather than Firefox.

You must be joking

Posted Nov 8, 2010 15:53 UTC (Mon) by jackb (guest, #41909) [Link]

There is a Firefox addon called Clear Fields that adds the functionality you are looking for.

You must be joking

Posted Nov 8, 2010 8:56 UTC (Mon) by nicooo (guest, #69134) [Link]

It works fine with Opera. X can't be blamed for people designing broken programs.

You must be joking

Posted Nov 9, 2010 5:20 UTC (Tue) by njs (subscriber, #40338) [Link]

AFAICT Firefox has an interesting tweak to this -- Control-L selects the text in the URL bar, *without* claiming the PRIMARY selection (which selecting the same text with the mouse would do). So to paste the PRIMARY selection into the current tab's URL bar, one can use the sequence: Control-L, backspace, middle-click.

You must be joking

Posted Nov 14, 2010 4:57 UTC (Sun) by tnoo (subscriber, #20427) [Link]

... which is exactly why I hate using FF. Konqueror has a "delete" button to rub out the old URL.

You must be joking

Posted Nov 14, 2010 6:10 UTC (Sun) by bronson (subscriber, #4806) [Link]

In FF just middle-click the favicon to the left of the URL. Instead of taking 2 clicks, it takes a single click.

Seems a rather weak reason to hate a browser.

You must be joking

Posted Nov 15, 2010 16:24 UTC (Mon) by wookey (guest, #5501) [Link]

Ctrl-U used to provide this vital feature, and then some eejit decided that in a browser it should open a window with the page source in it. That was _such_ a painful decision and still enrages me on a daily basis on machines where I haven't persuaded the system to change the keybinding. (Where did that come from?)

Ctrl-U can still be made to work (as 'clear line/box'), but it gets harder to find the rune every year. A button to prod for the same function would indeed be a useful alternative.

Like many here I find middle-button paste to be one of the finest things about GNU/Linux, and it's extremely tiresome when you get apps that don't do it right. I really hope it does not get sacrificed as part of the GUI re-architecting that it looks like we are headed for.

I use remote-X-over-ssh for graphical apps fairly regularly and it's extremely useful, but I accept the argument that we can achieve much the same effect by other means (SPICE/VNC/NX/whatever). I hope that does indeed come to pass.

A similar button does exist.

Posted Nov 18, 2010 3:52 UTC (Thu) by gmatht (guest, #58961) [Link]

In Google Chrome, middle-click the (+) new tab button to open a new window with that URL; this also works for searches.

With Firefox, middle-click the icon to the left of the URL bar. However, this does not work for searching for non-URLs; see the patch to implement middle-clicking on the search icon at: https://bugzilla.mozilla.org/show_bug.cgi?id=414849. I would be particularly interested to know if you would find the functionality implemented by this patch useful.

LPC: Life after X

Posted Nov 6, 2010 18:14 UTC (Sat) by drag (guest, #31333) [Link]

The way copy-n-paste works now is one of the worst features that users are exposed to on Linux on a regular basis. It's a chronically shit problem.

Hell, if switching to Wayland means getting rid of the way X handles copy-n-paste buffers, then that alone would be worth sacrificing X networking.

LPC: Life after X

Posted Nov 9, 2010 13:29 UTC (Tue) by nye (guest, #51576) [Link]

I disagree. I've been using Windows exclusively for a year now, and there are three things in particular that still drive me absolutely up the wall.
One of them is that I keep constantly trying to paste the primary selection before remembering that it doesn't exist. I don't think there's been a single day when I've not wished for that behaviour (there are programs that attempt to implement it, but it seems it's not really possible to do correctly).

(The other two are both related to window management; the design that has windows performing their own window management is, to put it politely, completely brain damaged.)

LPC: Life after X

Posted Nov 7, 2010 19:58 UTC (Sun) by iabervon (subscriber, #722) [Link]

Surely the right design in a Unix-style system is: open /dev/clipboard, call some ioctls to interact with type and format information, and use mmap or splice to move the data. Tying the clipboard to the windowing system is as bad an idea as tying printing to the windowing system; it's just that it's convenient.

(Note, of course, that /dev/clipboard is a symlink to /proc/self/session/clipboard, which is a pseudo-device provided by the session manager for the session that a task belongs to, and the session manager's implementation of this device is responsible for handling exchanges like applications that see that the current clipboard content is a jpeg image, but ask for a bitmap, or see that something is formatted text and may ask for rtf or plain text. Also that the session manager is probably only acting as a directory for the clipboard manager to make itself known to clients. And I'm not going to get into how the clipboard manager needs to work, because I know that I don't know.)
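
As a purely hypothetical illustration of that interface (nothing below exists; the device, the ioctl, and every name are invented for the sketch):

    /* Hypothetical /dev/clipboard paste path, per the proposal above. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct clip_type { char mime[64]; };            /* invented */
    #define CLIP_GET_TYPE _IOR('c', 1, struct clip_type)

    int paste_example(void)
    {
        int fd = open("/dev/clipboard", O_RDONLY);  /* hypothetical device */
        struct clip_type t;

        ioctl(fd, CLIP_GET_TYPE, &t);               /* negotiate the format */

        /* then map the payload instead of copying it through read() */
        off_t len = lseek(fd, 0, SEEK_END);
        void *data = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
        /* ... interpret data according to t.mime ... */
        munmap(data, len);
        close(fd);
        return 0;
    }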

Put another way: ICCCM is a de facto standard for this functionality and is an X standard, but the functionality actually has nothing to do with X. Any change away from X means that clients (or their toolkits) will have to change. Once they're changing, the right step is to do something that isn't particular to (e.g.) Wayland, because I want to copy/paste a URL from my TV to the IM client on my laptop by using my Android phone as a remote control (without interfering with the experience of the other people watching the TV, thanks). Obviously, that isn't going to just work immediately, but, when going to a new system for copy/paste, there shouldn't be any design limitations preventing it.

LPC: Life after X

Posted Nov 6, 2010 16:40 UTC (Sat) by gcarrier (guest, #36301) [Link]

SPICE can work for remote control. I also don't see why we couldn't have a network protocol handled in the toolkit(s) alongside local Wayland. I don't see any problem that couldn't be solved in an elegant manner.

However, I want dwm-like window management. (Most) clients should never, ever be required to handle their own positions, or to draw their own decorations. If they offer to, and if it is the user's preference, that's fine.
But I personally want to live in my nerdy ghetto, with my mostly consistent environment, and with the window management I deem usable.

New toolkits

Posted Nov 6, 2010 16:46 UTC (Sat) by rleigh (guest, #14622) [Link]

While I find work such as Wayland very exciting and a logical step in supporting modern graphics hardware, I do wonder if we can take proper advantage of it with contemporary GUI toolkits.

While GTK+ is nominally portable, it's really just a thin abstraction over Xlib; all the widgets are deeply tied to X internals (GdkEvent/XEvent event handling, expose events and partial redraws, for example) which would be mostly redundant in an OpenGL context where widgets would be represented in a scene graph. While I'm sure it could be converted to use OpenGL, I do wonder how efficient it would be compared with writing a new toolkit from the ground up, using abstractions which map directly onto what OpenGL can support efficiently, since [X]Window/GdkWindow drawing is basically tied to 2D rectangles. I understand Qt is part way there already, but I don't have the same experience with its internals.

New toolkits

Posted Nov 6, 2010 18:17 UTC (Sat) by drag (guest, #31333) [Link]

GTK is portable. It's already running on Wayland and a number of other platforms.

I don't see how GTK is 'a thin abstraction over Xlib'. That does not make any sense at all given how even on X you can use GTK on XCB instead of Xlib.

New toolkits

Posted Nov 6, 2010 18:47 UTC (Sat) by rleigh (guest, #14622) [Link]

It's a direct consequence of its interface. Take the main base class of the GTK+ widget hierarchy, GtkWidget. One of the key object signals is "event", which has a single parameter, a GdkEvent. The GdkEvent structure is a wrapper around the Xlib XEvent structure.

While it's obviously a portable toolkit, all the other backends retain this interface. That is, the underlying design principles and constraints of Xlib pervade the GTK+ toolkit, and cannot be removed without breaking pretty much all existing applications. So the Wayland backend (and all other backends) presumably need to synthesise "fake" GdkEvents which must by necessity be directly compatible with the XEvents which GdkEvent wraps. So irrespective of the backend, you're using a wrapper around the Xlib API as soon as you use Gdk. Gdk doesn't even attempt to abstract many of the Xlib data structures; it's pretty much a direct 1:1 mapping.
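
For concreteness, here is a small illustration (mine) of the interface being described; the signal and the types are real GTK+ 2 API, while the claim about their X heritage is the commenter's:

    /* The GTK+ "event" signal: the handler's payload is a GdkEvent,
     * a union that closely mirrors Xlib's XEvent. */
    #include <gtk/gtk.h>

    static gboolean on_event(GtkWidget *w, GdkEvent *ev, gpointer data)
    {
        if (ev->type == GDK_EXPOSE)   /* an X Expose event, thinly wrapped */
            g_print("expose on window %p\n", (void *)ev->expose.window);
        return FALSE;                 /* let default handlers run too */
    }

    /* usage: g_signal_connect(widget, "event", G_CALLBACK(on_event), NULL); */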

I am by no means questioning whether GTK+ (and other X toolkits) can be ported. I am just questioning whether the constraints upon their design as a result of their X legacy will result in a rather inefficient implementation compared with implementing a new toolkit which does not need to inherit this legacy.

Regards,
Roger

LPC: Life after X

Posted Nov 6, 2010 19:48 UTC (Sat) by zander76 (guest, #6889) [Link]

Interesting read.

X is a mammoth beast. It is quite amazing really, especially when you consider it has been around for 25 years.

The Unix/Linux philosophy is to do one thing well. If you consider "everything" to be "one" thing, then X does that very well.

It is quite interesting to think about X in these terms: if I were going to start again today, how would I tackle this?

The first thought would be to address each task as an individual project. With smaller projects, each piece can be replaced with something new; a "best pieces float to the top" sort of concept.

At what level would I address each piece? Would I put networking and window creation on the same level? Is this something that everybody needs, or would it perhaps be better to have separate window and networking libraries that are tied together at a higher level?

I am not trying to say I know something, because I don't. I simply think the article is interesting, and it's interesting to think about Life after X.

As a game developer I am not interested in X development, but windowing, input, drivers, etc. are areas where I am. As an example, input is a very important area, and game developers tend to duplicate the efforts of X and create their own input layers.

Ben

LPC: Life after X

Posted Nov 7, 2010 11:56 UTC (Sun) by alankila (guest, #47141) [Link]

There is a saying that if you put 2 teams to work on a compiler, you get a 2-pass compiler. In this case the architecture gets a redesign: it may be impossible to just drop a new piece in place of an old piece, because there is no longer a slot to drop the new piece into.

Wayland will not do network transparency, because the applications apparently must have direct access to the underlying framebuffer. Therefore the displaying parts must run locally. I personally support this, because X is really pretty inefficient when it comes to actually getting pixels onto the display: you can't seem to escape blitting and conversions with X, no matter how hard you try.

On the other hand, network transparency would then have to be done at the toolkit level if it's really wanted: the app would run remotely and send a toolkit-specific protocol over an ssh pipe instead of drawing commands or textures. In practice it would apparently be something like this: "Put a button at coordinates x, y, size dx, dy, with text z. Give it a gradient and rounded borders of 2 pixel radius. Here are the colors for the gradient. Send me the event 'foo-clicked' when the user pushes it." The local application could probably be some kind of generic "gtkd" or "qtd" that would know how to construct the UI in response to commands like that.
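
One invented flavor of what a single message in such a "gtkd" protocol might look like (nothing like this exists; every name here is made up for illustration):

    /* Hypothetical wire message for a toolkit-level remoting protocol. */
    #include <stdint.h>

    struct remote_button_msg {
        uint32_t opcode;         /* e.g. OP_CREATE_BUTTON (invented) */
        int32_t  x, y, dx, dy;   /* placement and size */
        uint32_t border_radius;  /* "rounded borders of 2 pixel radius" */
        uint32_t gradient[2];    /* RGBA endpoints of the gradient */
        uint32_t event_id;       /* reply with this id on 'foo-clicked' */
        char     label[32];      /* button text z, NUL-terminated */
    };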

LPC: Life after X

Posted Nov 7, 2010 14:16 UTC (Sun) by drag (guest, #31333) [Link]

Possibly. There are other options.

Spice, right now, works at the 'hardware level' in a virtualized environment. You install paravirtualized drivers in a Windows or Linux KVM guest and they work with the Qemu software to provide very effective remote access. I am guessing that similar things would be possible by just running a special virtual driver on a non-virtualized host, like how you can have virtual audio cards for network or bluetooth audio and things like that. But just for remote access.

Or you can just do screen capture. There is now, amazingly, very good screen recording software for Linux. You know, for making demos or little youtube videos on how awesome Compiz or whatever is. It is quite able to run a 1440x900 15FPS screen capture on very modest hardware without breaking a sweat, with almost no noticeable impact. Of course it's using a recording method that is very inefficient in terms of compression; an optimized form of WebM or MJPEG or something would probably be good. You can capture individual applications too; what you need then is just some sort of proxy that is capable of sending your keyboard and pointer inputs back to the original application.

So there are a few different ways.

I'd really hate to have it handled at the toolkit level. I know that it would probably be the cleanest, but I dislike it just because the chances of multiple toolkits being able to do it in a correct and consistent manner are just about nil.

LPC: Life after X

Posted Nov 7, 2010 21:29 UTC (Sun) by zander76 (guest, #6889) [Link]

It certainly opens up a lot of options. As he points out in the article, you could literally drive networking with anything. Take a step back from the original concept of how X does its networking and things get really interesting.

As much as I hate this I will describe it anyway. You could drive an application with a web service: HTML and JSON driving the application. The sad part is that with browser caching this could actually be somewhat efficient.

Breaking it apart would certainly put a lot more load on the distributor when choosing the pieces to put together, but hey, they make lots of money anyway :) (joke)

Ben

LPC: Life after X

Posted Nov 7, 2010 21:51 UTC (Sun) by dlang (guest, #313) [Link]

one funny thing is that even if you limit yourself to running graphics locally, you should still design your graphics system with similar concerns in mind. it's not efficient to have your main CPU do all the work and pass bitmaps to the video card; besides not taking advantage of the GPU at all, the bus between the CPU and the video card will be the bottleneck. As a result, even for local graphics, the main CPU should be describing what it wants displayed and sending that description to the video card, rather than just sending the image to be displayed.

moving the video card to another machine (potentially in another office) changes the speed of this interconnect, and adds increased latency to the connection, but the fundamental problem that you have a low-bandwidth link between the CPU and the display is there no matter what.

in terms of bandwidth, the network connection between two machines in the same office is faster than the connection from the CPU to the video card was a few years ago (PCI vs Gig-E)
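
(To put rough numbers on that comparison; these are mine, not the commenter's: classic 32-bit, 33MHz PCI peaks at 33M transfers/s × 4 bytes ≈ 133MB/s, while gigabit Ethernet carries 1000Mbit/s ÷ 8 ≈ 125MB/s, so the two links really are in the same ballpark.)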

LPC: Life after X

Posted Nov 8, 2010 2:31 UTC (Mon) by drag (guest, #31333) [Link]

> As a result, even for local graphics, the main CPU should be describing what it wants displayed and sending that description to the video card, rather than just sending the image to be displayed.

This, as a side note, is also why, even though in benchmarks of graphical toolkits the software rendering of X often ends up being faster than the hardware-accelerated version, it's still advantageous to use GPU rendering IF you can do the majority of the rendering on the GPU. Certain toolkit microbenchmarks will often show that CPU rendering is faster at some things than GPU rendering. The GPU just sucks at certain things, but ideally you want to use the GPU 100% of the time to avoid multiple trips over the PCI Express bus. Each time you have to send texture data across the bus you're just burning hundreds of thousands of GPU and CPU cycles waiting for the data to be pushed over.

You can imagine the huge penalty you pay if you do, say, text rendering in software, but the rest on the GPU. Even if the GPU rendering were a dozen times slower, in the real world the all-GPU path will still win.

Luckily AMD and Intel are trying to simplify things quite a bit by putting CPUs and GPUs on the same hunk of silicon. No need to flush textures back and forth if you're sharing the same memory. :) But even then, making proper use of the GPU will yield huge improvements in efficiency and performance.

LPC: Life after X

Posted Nov 8, 2010 14:02 UTC (Mon) by nix (subscriber, #2304) [Link]

This is just a re-expression of a problem graphics developers have had for fifteen years or more, ever since video cards started getting dedicated RAM with relatively high access latencies from the CPU: you can store stuff in VRAM, where it's really fast to manipulate with the GPU and to display, or you can store it in main memory and bash it with the CPU, where it's much slower to display; but if you mix the two, you get incredible sloth. Back when the offscreen pixmap cache didn't have a defragmenter (pre X.org 1.6.0) it tended to get too fragmented to put any useful amount of text into... and text scrolling, on a Radeon 9250, then took several *seconds* per screen. That was entirely because of repeated CPU<->VRAM data transfers. They're *slow*.

(Of course, KMS should fix all of this by giving us a proper memory manager for VRAM-plus-main-memory. It doesn't seem to be there yet, though: I still see occasional scrolling slowdowns when the pixmap cache gets too fragmented and the defragmenter hasn't kicked in yet.)

LPC: Life after X

Posted Nov 7, 2010 19:06 UTC (Sun) by jond (subscriber, #37669) [Link]

Since GUI developers are encouraged to use qtdesigner and glade and similar, the latter of which spits out a GUI description in XML which is interpreted at runtime, we aren't far from something like HTTP/HTML as a suitable GUI description language over a network.

LPC: Life after X

Posted Nov 7, 2010 19:20 UTC (Sun) by iabervon (subscriber, #722) [Link]

Since this comes up during the "Lessons from Unix" series, it seems to me like the obvious right solution, now that we've got the hardware for it, is to say that applications open a pseudo-terminal device, which provides input events and has a frame buffer which supports various elaborate operations. But these aren't hardware devices any more than pts/1 is; the input events are only those directed to the application that got the pseudo-terminal, and the frame buffer is only the application's window, and the kernel is providing an abstraction layer and hiding what is on the other side. It may be that there is a userspace program which is compositing the windows onto the hardware frame buffer; then again, the windows might be hardware texture maps, and a userspace program has simply arranged the hardware's scene graph to have these textures get rendered. Or maybe the framebuffer device is proxying everything over a network connection.

But I think the important magic is really having OpenGL as system calls on a file descriptor for a graphical pts device; even though OpenGL in the kernel is a terrible idea, OpenGL over a channel that the kernel is responsible for is a great idea, and even better if the "device" side of the channel can tell the hardware driver to snoop and handle stuff that the hardware supports directly.

LPC: Life after X

Posted Nov 7, 2010 21:42 UTC (Sun) by zander76 (guest, #6889) [Link]

Hey,

It seems like this solution is a little too close to the metal. I would be more inclined to deal with the actual event and not the keystroke. The data would be my concern, and I'd leave control of the mouse/keyboard in the hands of the client itself. Perhaps I am missing a use case that would require that level.

Even in real-time gaming I have graphics cached on the client, and I only send information like position and direction to the server. The individual keystrokes and mouse events are handled completely on the client side; only the results of those events are sent along the wire.

Ben

LPC: Life after X

Posted Nov 7, 2010 23:22 UTC (Sun) by iabervon (subscriber, #722) [Link]

I think you're misunderstanding my proposed architecture; there's a daemon on the computer with the keyboard that decides which application should see a key press event when the user presses a key on that keyboard. That daemon may also provide keystroke events when the user interacts with an on-screen picture of a keyboard, and omit keystrokes that have been configured to control the mouse pointer. For some touchpads, the daemon would map between what the hardware outputs (two fingers moving in a particular way) and what that gesture means (scroll up), so the touchpad behavior is consistent across applications.

If the application is remote, the devices it has opened on the machine it is running on simply proxy the devices that an application on the machine with the hardware would have opened; all of the "what does this particular output from this particular hardware mean" is dealt with on the machine with that hardware.

I think, in fact, that we agree on what should be done where, but the terms "client" and "server" are somewhat unclear in the context of online gaming in X; the "client" is the application, but there are things called "servers" both closer to the user and further away from the user, and all three programs depend on operating system services.

LPC: Life after X

Posted Nov 7, 2010 22:00 UTC (Sun) by dtlin (subscriber, #36537) [Link]

Speaking of "Lessons from Unix"…

Plan 9 provided /dev/cons, /dev/mouse, and /dev/draw devices for keyboard input, mouse input, and screen RPCs. The 8½ (later rio) window manager would rebind these devices for each process opening a new window (plus a few others for window management) and would act as a multiplexer. Network transparency becomes just mounting a filesystem remotely.

Sounds not too far off of what you're suggesting here.

LPC: Life after X

Posted Nov 8, 2010 10:45 UTC (Mon) by marcH (subscriber, #57642) [Link]

> Keith asked: how many of these applications care about network transparency, which was one of the original headline features of X?

Of course applications do not care about network transparency: this is exactly what "transparency" means! Hearing such a weird question from such a high-profile developer is quite scary. Especially when he works at the company which made my laptop GPU barely usable in all the recent releases.

*Users* care about network transparency. Now a question that makes sense is: how many? (and it has probably been debated enough already)

LPC: Life after X

Posted Nov 8, 2010 16:19 UTC (Mon) by foom (subscriber, #14868) [Link]

Except, as many people have pointed out already: if X applications don't care about working remotely, then they *DON'T* work remotely, because they do too many round-trips, which becomes deadly over a network with more than a few ms RTT. I have tried to use X remotely a few times, but it simply isn't usable for most things other than emacs and xterm.

The "transparency" feature of X needs a lot of work...

LPC: Life after X

Posted Nov 8, 2010 17:07 UTC (Mon) by marcH (subscriber, #57642) [Link]

> if X applications don't care about working remotely, then they *DON'T* work remotely, because they do too many round-trips, [...] I have tried to use X remotely a few times, but it simply isn't usable for most things other than emacs and xterm.

On a LAN this is just plain wrong.

Not so long ago people were working with dumb X11 terminals on 10Mbit Ethernet. For sure they were not playing games or using Eclipse; that does not mean they were using only emacs and xterm.

Not later than this morning I used this over encrypted (!) X11 forwarding:
http://www.methylblue.com/filelight/
It worked like a charm, just like thousands of other X11 applications. Out of curiosity I just tried gimp, firefox, and OpenOffice, and they all work flawlessly on my LAN. I did not even have to get rid of the encryption!

The reason why X11's networking is not used so much nowadays is just that a powerful PC is now cheaper than a dumb X11 terminal. It is NOT because every single X11 framework is badly designed and wasting thousands of round-trip times and cheap CPU cycles.

For sure there are plenty of cases where X11 networking is irrelevant. There are ALSO plenty of cases where it works perfectly thank you very much.

LPC: Life after X

Posted Nov 8, 2010 17:41 UTC (Mon) by droundy (subscriber, #4559) [Link]

The problem is that applications (and toolkits, antialiasing, etc) have bloated the number of round-trips required. In my (admittedly anecdotal) experience, modern applications (including a current emacs) run *way* slower over a network than similar programs did in the 1980s. Everything has been designed and optimized based on the assumption that the X server is on the same machine as the client.

LPC: Life after X

Posted Nov 8, 2010 18:30 UTC (Mon) by drag (guest, #31333) [Link]

This is because it is almost always true that the client is on the same machine as the server, within an error of about 0.1%.

LPC: Life after X

Posted Nov 20, 2010 1:04 UTC (Sat) by Wol (subscriber, #4433) [Link]

Actually, in my house, that's true about 50% of the time.

Soon, probably, to be 33% of the time ... :-)

(I've got a Phenom X3 as my main machine, an Athlon 1050 with 0.75GB of RAM that I use as an X terminal, and an Acer Aspire One that will probably soon have Smeegol on it and get used as an X terminal too)

Cheers,
Wol

LPC: Life after X

Posted Nov 8, 2010 20:21 UTC (Mon) by foom (subscriber, #14868) [Link]

>> if X applications don't care about working remotely, then they *DON'T* work remotely, because they do too many round-trips, [...] I have tried to use X remotely a few times, but it simply isn't usable for most things other than emacs and xterm.
> On a LAN this is just plain wrong.

Thanks for the creative editing.

Let me fill in the "[...]" you snipped out of my original message:
>which becomes deadly over a network with more than a few ms RTT
Hey, guess what! On a LAN, you won't have more than a few ms RTT!

Now try running those apps when you're working at home, a few miles away in real space, and 50ms away in RTT.

LPC: Life after X

Posted Nov 8, 2010 21:54 UTC (Mon) by marcH (subscriber, #57642) [Link]

My bad, I honestly missed that. Sorry.

Even if X11 networking is or has become unusable on the WAN, it does not really matter: it still works very well on the LAN, so it absolutely cannot be dismissed as if it were a forgotten thing from the past.

LPC: Life after X

Posted Nov 22, 2010 5:59 UTC (Mon) by dododge (guest, #2870) [Link]

Here's a real example from the 10Mbit days: early web browsers often had their application logo in the top corner of the window, and would run a little animation loop when loading a page. I believe in Netscape's case it would send the entire logo image to the X server for every frame of the animation. This worked fine on the same machine because it could use things like XSHM to send the image data out-of-band, but when running remotely it meant continuously and repeatedly encoding and transmitting each frame of the animation through the X protocol.

If they had designed for network transparency they presumably would instead have cached the frames as a handful of server-side pixmaps and flipped between them. It seems like a little thing, but as I recall this minor oversight made it perform terribly over a LAN.
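
A rough sketch (mine, not dododge's) of the server-side caching being described: upload each frame to a Pixmap once, then flip between the frames with cheap XCopyArea requests instead of re-sending image data.

    /* Cache animation frames server-side, then flip between them. */
    #include <X11/Xlib.h>

    void animate(Display *dpy, Window win, GC gc,
                 XImage **frames, int nframes, int w, int h)
    {
        Pixmap cache[nframes];

        for (int i = 0; i < nframes; i++) {
            cache[i] = XCreatePixmap(dpy, win, w, h,
                                     DefaultDepth(dpy, DefaultScreen(dpy)));
            /* one-time transfer of the pixel data to the server */
            XPutImage(dpy, cache[i], gc, frames[i], 0, 0, 0, 0, w, h);
        }

        for (int i = 0; ; i = (i + 1) % nframes) {
            /* per-frame cost is now a tiny request, not a whole image */
            XCopyArea(dpy, cache[i], win, gc, 0, 0, w, h, 0, 0);
            XFlush(dpy);
            /* ... wait one frame interval ... */
        }
    }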

LPC: Life after X

Posted Nov 22, 2010 8:18 UTC (Mon) by nix (subscriber, #2304) [Link]

IIRC this was only true in Netscape 4, which is when Netscape decided to give the implementation job to a gang of crack-addled monkeys rather than JWZ. (I think that was his phrase...)

Quality suffered accordingly.

LPC: Life after X

Posted Nov 22, 2010 10:56 UTC (Mon) by dododge (guest, #2870) [Link]

Yeah, NS3 or NS4 sounds about the right timeframe. I mostly remember noticing that it was much slower than it should have been, and having a real WTF?! moment when I traced the X connection and saw all those images flying by.

LPC: Life after X

Posted Nov 8, 2010 16:30 UTC (Mon) by scripter (subscriber, #2654) [Link]

When Keith said applications, I think he meant purpose-built devices. It's more clear in context. We can substitute as follows:

"how many [of the mobile systems, media devices, and in-vehicle embedded systems] care about [graphical] network transparency...?"

As for me, I'm not sure that I need the Android calendar app to be able to display its output on a remote display via X11 protocol or something similar. Instead, I'd load my google calendar elsewhere.

I don't need the Android music player to be able to display elsewhere, because I'd simply load my music collection elsewhere using a separate device, separate application, and separate network protocol.

LPC: Life after X

Posted Nov 9, 2010 21:57 UTC (Tue) by dlang (guest, #313) [Link]

you absolutely do want the ability to remote the display from an android device.

think about the movies and TV programs that show a person flicking the display from a hand-held device to a wall-size screen. If that's not an example of people wanting network-transparent display capabilities, even on portable devices, I don't know what is.

Network Transparency

Posted Nov 8, 2010 21:57 UTC (Mon) by dskoll (subscriber, #1630) [Link]

I really like X's network transparency. In the end, I will be satisfied if network transparency can be achieved on a per-application basis, no matter how. (If it's just slinging bitmaps around, I can live with that as long as it's per-application and reasonably responsive.)

However, there's one big advantage of X's design. Separating out the clients from the servers meant the designers had to really think about the interface. It meant clients couldn't just scribble all over the frame buffer however they liked. And it meant the server could usually prevent disobedient or insane clients from taking the system down.

I think this discipline is one of the key factors in the stability, security and reliability of the Linux desktop, and it's something we may lose in a design that lets clients do whatever they want.

Network Transparency

Posted Nov 9, 2010 5:28 UTC (Tue) by njs (subscriber, #40338) [Link]

> Separating out the clients from the servers meant the designers had to really think about the interface. It meant clients couldn't just scribble all over the frame buffer however they liked. And it meant the server could usually prevent disobedient or insane clients from taking the system down.

Uh... X does a *really bad* job of this, actually. There are some security extensions these days whose details I'm not familiar with, but at least in classic X, any application can do pretty much anything it wants to -- scribble all over the screen, snoop all input events, lock up the server, change the contents of other app's windows...

Typo

Posted Nov 11, 2010 2:33 UTC (Thu) by dw (subscriber, #12017) [Link]

s/is going to into the/is going to go into the/

LPC: Life after X

Posted Nov 18, 2010 14:08 UTC (Thu) by pivot (guest, #588) [Link]

Wow! Seldom is the peanut gallery more active than when it comes to discussions about X11!

LPC: Life after X

Posted Dec 9, 2010 22:36 UTC (Thu) by Lestibournes (guest, #71790) [Link]

In order for X to successfully deal with Wayland it needs to do the following:
1. Reframe the discussion so that it's not X vs Wayland, but Wayland as a replacement for the layer of X that takes care of the local display.
2. Provide a user experience on the local machine that is not noticeably inferior to the experience of native Wayland apps.
3. Provide a superior experience for remote apps.
4. Provide an API that is at least as attractive to developers as native Wayland.

For as long as X manages this, it will remain the standard API for GUIs. But the moment X falters in any one of these aspects, it can be completely gobbled up by Wayland.


Copyright © 2010, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds