Before starting to talk about this, I would like to point out that "DPI" here actually means PPI. People unfamiliar with the convention of (wrongly) saying "DPI" might ask, "Well, I read your previous blog post saying that DPI doesn't affect image quality, so why does this HiDPI mode matter?"
So yes, even technical people at Apple and MS use "DPI" to mean "PPI". Please keep the history and the audience in mind. :)
Apple calls this HiDPI mode "Resolution Independence" and uses "HiDPI" as the technical background term for it. MS uses the term "High DPI awareness".
I have two monitors. One is 24″ with 1920×1080, and the other is 22″ with 1920×1080. So the former has 91.79 PPI, while the latter has 100.13 PPI according to this online DPI/PPI calculator. Both are considered normal monitors. High DPI monitors, by contrast, have much higher densities, like 200+ PPI.
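You don't actually need an online calculator for this. PPI is just the diagonal pixel count divided by the diagonal length in inches; a quick sketch using the two monitors above:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch = diagonal length in pixels / diagonal length in inches."""
    diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
    return diagonal_px / diagonal_inches

print(round(ppi(1920, 1080, 24), 2))  # 91.79 for the 24-inch monitor
print(round(ppi(1920, 1080, 22), 2))  # 100.13 for the 22-inch monitor
```

The numbers match the online calculator's results exactly.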
Until we have lower-priced high DPI monitors, we have to handle both normal DPI and high DPI, and you can see what they look like on your current non-high-DPI monitor.
(The image is at approximately 144 PPI; GIMP says it is 143.99 DPI. So although the screenshot has a resolution of 1920×1080, it will appear smaller than your typical monitor, which is around 100 PPI like mine. This is a good counterexample to the claim I mentioned in my previous post, that "DPI doesn't affect image quality." That author blindly lumped PPI into a topic about DPI. My point is that when someone says "DPI", we should be able to tell whether they actually mean PPI or DPI. The other blog I linked in my previous post is strictly about DPI, and its author knows the difference between PPI and DPI. However, casually speaking, people with a background in computers rather than desktop publishing don't differentiate DPI and PPI; by "DPI" they usually mean "PPI". Even in GIMP, although it is written as 143.99 DPI, it is actually PPI. You can tell because the image is drawn smaller than your monitor (which has the same pixel dimensions as the image) due to the monitor's higher PPI.)
One very important thing to mention here is that those two monitors are both normal-DPI monitors. (I mean PPI. :) Confused? I just use "DPI" because it is the term Apple and MS use in their documentation.)
You may ask, "Why does turning on HiDPI mode make whatever is drawn on the screen bigger?" (I intentionally put the browser window across the two monitors to show you that.)
Enabling HiDPI mode using QuartzDebug lets OS X pick 2x images on a normal monitor (or probably scale 1x ones up). In real HiDPI mode, which you use if you have a new MacBook Pro with Retina Display, the OS also picks 2x images (or scales up), but it draws the 2x image at the same size, not the same dimension. (By "dimension" I mean how many pixels along the x and y axes; by "size" I mean how many inches or centimeters along those axes.) Because the monitor has a higher DPI (approximately 2x), the size of the 2x image in x and y will be the same as that of a 1x image on a normal DPI monitor.
Because my monitor is actually a normal DPI monitor, enabling HiDPI makes everything look doubled along each axis. (Area-wise it is 4 times, because 2 times in x by 2 times in y is 4 times the area.)
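The "dimension vs. size" distinction is just arithmetic: physical size equals pixel dimension divided by PPI. A small sketch, using round 100 PPI / 200 PPI figures that I am assuming for illustration:

```python
def physical_inches(pixels, ppi):
    """Physical extent of an image on screen, in inches."""
    return pixels / ppi

# A 1x image, 200 px wide, on a normal ~100 PPI monitor:
print(physical_inches(200, 100))  # 2.0 inches

# The 2x version (400 px wide) on a ~200 PPI Retina panel:
print(physical_inches(400, 200))  # 2.0 inches: same size, denser dots

# The same 2x image on my normal 100 PPI monitor (the HiDPI experiment above):
print(physical_inches(400, 100))  # 4.0 inches: doubled along each axis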
At first, this can be somewhat confusing. If it is meant to simulate high DPI mode on a normal DPI monitor, shouldn't it shrink things? Then drawing a 1x image in that mode would look smaller, and drawing a 2x image would look normal.
Well… yes. If I were building this "virtual HiDPI" mode, I would implement it like that.
So, "Enabling HiDPI display modes" is not about simulating a HiDPI monitor on a normal DPI monitor.
It is to let the OS pick 2x images (or scale up 1x images, if it does that) on a normal DPI monitor.
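The selection logic can be sketched roughly like this. The file names follow Apple's real "name@2x.png" convention, but the function itself is a hypothetical illustration, not an actual API:

```python
import os

def pick_image(base_path, scale_factor):
    """Pick the best raster asset for the display's scale factor.

    'icon.png' is the 1x asset and 'icon@2x.png' the 2x asset
    (Apple's naming convention). Falls back to scaling up the 1x image.
    """
    root, ext = os.path.splitext(base_path)
    if scale_factor >= 2:
        candidate = f"{root}@2x{ext}"
        if os.path.exists(candidate):
            return candidate, 1.0        # native 2x asset, drawn as-is
        return base_path, scale_factor   # no 2x asset: scale the 1x one up
    return base_path, 1.0

best_file, upscale = pick_image("icon.png", 2)
```

In real AppKit/UIKit this happens inside the image classes based on the window's backing scale factor; the sketch only mirrors the idea.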
Confusing? It comes down to which one you hold fixed: the pixel density, or the size.
Actually, when Apple introduced this Resolution Independence at WWDC a while ago, in Snow Leopard (or was it Leopard?), their explanation was also somewhat misleading, or upside down.
It can be easily explained like this: "If you stretch an icon to make it bigger, look! There is no jaggy effect! It scales up nicely like a vector image! If there are 2x, 3x, or 4x raster images, the OS picks them up; if not, it scales up the existing images!" If you think of it with the DPI held fixed, then yes, it is about drawing 2x, 3x, 4x icons or images by scaling up nicely or by picking up larger existing images. That is the foundation of the technology, but the point is not actually to draw bigger images more nicely. It is to use bigger images on higher-DPI monitors so that those images look the same size as they do on normal DPI monitors. In other words, scaling up images is compensated by denser dots.
This is one of the reasons I don't think High DPI awareness or Resolution Independence is cool. (I don't say it's bad. Actually, I think it is a very good feature.) In the era of the Apple II and of IBM PC compatibles with CGA, EGA, VGA, and Super VGA, we thought higher resolution was better. There were bigger monitors, but there were also 14″ or 15″ monitors with higher resolution. We even had multisync monitors that supported multiple resolutions on a single monitor. In the VGA generation, we started to have monitors that could display photos, although the pixels were still big and made photos look a little mosaic-like. Compared to the previous generation of video cards and computers, they were "photos"! MSX computers show this effect very well, because they had a 320xblah blah mode supporting a lot of colors, and a higher-resolution mode with somewhat fewer colors. We watched "pictures" become "photos". Since Super VGA, images on monitors have been real photos. If you used an Amiga, you're lucky: you had this future feature earlier than IBM PC compatible users.
However, displaying text in graphics mode was still not very good. After that period, the technology for building monitors evolved a lot. You didn't see rounded distortion at the edges of CRT monitors anymore; you had Sony Trinitron monitors and other competing technologies. But to display fonts with great detail, the resolution, the "DPI", was not high enough. So every year, monitor makers introduced monitors with higher resolution, which made characters and images drawn on screen smaller. To young people with good vision, it was still OK.
Now, more and more people like me have started to magnify what is drawn on screen by setting a lower resolution. For example, an old computer displayed a 12-point font big enough. Compare that with the monitor I have: I set 13 or 14 point to get a size similar to 12 point on the old resolutions and monitors.
However, 12 point is something of a standard in the printing business. We don't want to see smaller and smaller text and images. Although higher DPI and resolution on a monitor of the same dimensions give better image quality, and thus help graphic designers and photographers produce better results, it works against normal computing.
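The shrinking-text effect is easy to quantify. A point is 1/72 inch, and many systems historically mapped points to pixels assuming roughly 96 PPI (the 96 figure is my assumption of a common default, not something from the monitors above). Once that pixel count is fixed, a denser panel makes the glyph physically smaller:

```python
def point_to_pixels(points, assumed_ppi=96):
    """Pixels a font occupies when the system assumes ~96 PPI."""
    return points / 72 * assumed_ppi

def physical_height_inches(points, actual_ppi, assumed_ppi=96):
    """Real on-screen height when those pixels land on an actual_ppi panel."""
    return point_to_pixels(points, assumed_ppi) / actual_ppi

# 12 pt becomes 16 px under the 96 PPI assumption:
print(point_to_pixels(12))                           # 16.0

# Those same 16 px shrink as the panel gets denser:
print(round(physical_height_inches(12, 91.79), 3))   # 0.174 in on my 24-inch
print(round(physical_height_inches(12, 100.13), 3))  # 0.16 in on my 22-inch
print(round(physical_height_inches(12, 200), 3))     # 0.08 in on a ~200 PPI panel
```

This is exactly why I bump the font up to 13 or 14 point on my denser monitor.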
Also, virtually all current monitors are built with LCD panels, not CRTs. An LCD has its own native resolution, so although it is possible to change the resolution, setting a lower resolution on an LCD monitor doesn't give a good result: it looks like a mosaic. Then it is better to keep the physical dimensions the same while increasing the resolution. The benefit is that images and text gain finer detail. If you just scale up existing images, they won't, and the edges can still look a little blurry. However, programmers will prepare 2x images for raster graphics, or write their drawing code with vector APIs like NSBezierPath etc. Buttons or any GUI widgets drawn like that will look great.
They can even introduce more detail in the 2x images. For example, a 1x image of a car might omit the handles on the doors; in the 2x image, they can draw the handles, and because a higher-DPI monitor can present such small details, you will be able to see them. Antialiasing can look great too: because there are more pixels, it comes out smoother.
However, let me tell you why I am not really delighted by this.
Monitor manufacturers make higher DPI monitors… to sell. Anyway, you will have 2K and 4K monitors and TVs at home soon. However, your house is not big enough to hang monitors the size of the silver screens installed in theaters. They will largely remain in the 20″, 30″, and 50″ range. That means the PPI increases.
Current normal DPI monitors already provide very good resolution. Fonts and images drawn on them look great. I'm not saying that image quality on higher-DPI monitors is not good; it is better. But current normal DPI monitors are not in the state of the old CGA and EGA monitors. For most people there is no practical reason to switch to those "Retina Displays". (Eventually they will be forced to, because manufacturers will stop building normal DPI LCD panels.)
What I don’t like is this part.
Look at the current MacBook Pro with Retina Display. Look at its price. Although it will get cheaper in coming generations, it is more expensive. If you write iPad apps for the Retina Display, buying an MBP with Retina Display can make sense, because you can test those 2x images in the simulator more easily without them filling your whole screen. If Apple provided a "minifier" for people with normal MBPs, or if the Retina iPad simulator could be shown at 1/2 size at will, we could keep using normal MBPs, but Apple will not do that, because they want to sell newer models.