Posts Tagged ‘Apple’

A parallel computer built with Apple II boards

AppleCrate II: A New Apple II-Based Parallel Computer


It was 1991. Lee HyunChang was a professor I met in a KETEL SIG for users of non-IBM-PC-compatible machines. He told me about a parallel computer he had designed and built using abandoned Apple II boards from the Se-Woon market in Seoul. At that time, lots of cloned Apple II+ boards were being dumped, because the 8-bit era had ended a while before.
He sold the Apple II+ parallel computer to a department store which, as far as I know, used it to control a big clock.

I didn't even think about whether I could do something like that here in the US! (even though it's on the Internet.)

Memory compaction in OS X Mavericks

Doesn't it sound like something you've heard somewhere before? Memory compaction.

Anyone who has taken an OS course will know what this is. With memory fragmentation, the total amount of free space is larger than the program to be loaded, yet the program cannot be loaded for lack of a contiguous region. Memory compaction is used in that situation: the blocks in use are packed to one side, so the free space becomes one contiguous, usable region.

I'm watching the keynote again now, and this compressed memory is nothing new. It already existed in the System 7 days, and on Windows it was called heap compaction. Of course the concrete implementation and strategy can differ in the details, but the overall idea is the same.

When OS X 10.0 came out, it disappeared as a memory-loading technique named "prebinding" was introduced; later prebinding itself was removed too, and Apple advertised improved performance, saying it was now handled automatically.

So why did memory compaction come back? My guess is that with 4K video and other memory-hungry software on the rise, the same old problem has effectively resurfaced.

Then how do you compact the memory used by running programs without interfering with their execution? (Actually, "compression" is the wrong word here; it is compaction, i.e. packing blocks together to remove memory fragmentation.) The classic technique used the notion of a "heap", but not in the usual stack-vs-heap sense; it relied on pointers to pointers.
(Hmm... these days Apple uses the terminology common in CS/CSE, but around Mac OS 7 they liked to coin their own terms. So even "stack" and "heap", terms commonly understood in this field, got redefined: when explaining memory compaction, they called the memory region whose blocks could be relocated via pointer-to-pointer, strictly speaking the region reached through a pointer to a pointer, the "heap". Confusing, isn't it?)

I'm curious how Mavericks handles this.

In fact, the odd thing is that memory compaction wasn't being done all this time. It wasn't really needed: loading got faster, memory got much bigger, and speed became more important than efficient memory use. But now "big data" is a buzzword, and PCs are actively used not only for video playback but for editing and encoding, so the memory limit is reached far more easily than before. I suspect that is why it was put back in.

Of course, a full 20-odd years have passed since then, so I expect OS researchers have worked out more efficient implementation techniques in the meantime.

The impressive part of this WWDC announcement was not the variety-gift-set style introduction of OS X features, but that real changes happened in the OS and system software in the sense we CS majors mean, which shows Apple is not neglecting this area.
In fact, since Leopard the OS itself hasn't changed much; it's the services running on top of it that have changed.

To me, iOS 7, Mac OS X, and the Mac Pro were all impressive, but that was the most encouraging part.

For the first time in a while, I felt something similar to the early 1990s.

Maybe Apple people are considering Mac OS X/iOS hybrid apps?

The last time I looked up documentation on "How to build an iPhone/iPad hybrid app" was roughly two years ago. (When did the first-generation iPad come out?)

At that time, I don't think there was a "platform key" in this identifier format in Info.plist.


According to the section "Updating Your Info.plist Settings" in "Creating a Universal App":

For apps that run only on iOS, you can omit the platform string. (The iphoneos platform string is used to distinguish apps written for iOS from those written for Mac OS X.)

Hmm… what does that mean? Are they preparing a unified executable file format for iOS and Mac OS X, like they did for 32/64-bit and for Intel/PowerPC?
So you could choose whether a project is built for Mac OS X, iOS, or both?
Surely iOS uses the ARM instruction set in Mach-O format (I believe), while Mac OS X uses the x86/x64 instruction set in Mach-O format. Then… yeah… one bundle could be possible.

Hmm… if they announce a Mac-iPad hybrid device, it could be interesting. I like Lenovo's efforts in this direction with Windows 8.


WWDC 2013

WWDC 2013 ticket sales start April 25th, 10 AM PDT

You may wonder whether it's worth the fee, $1,600. (I hate saying $1,599.)

So, here I would like to summarize why it can be good to attend, and why it can be OK not to.

  • Why you should attend
    • You are surrounded by Apple haters and naysayers. So, you need motivation.
      • Until iPhone/iOS was announced, Mac developers were a small herd. You know the Mac is a very exciting platform, thanks to the elegance of its frameworks and much else, but you felt desperate because the people around you said negative things based on false information. At times like that you need strong motivation. I can't forget how amazing that experience was. (I was also a Windows/Unix developer, so my feeling wasn't as strong as a Mac-only developer's. If you developed only for the Mac, imagine how great the feeling could be!)
      • Even nowadays, although Apple is doing quite well, there are still many naysayers: stockholders worried that Apple's margins on iOS devices are not as good as before, and journalists who don't understand the whole picture. Check some tech news sites recently; they even said Tim Cook should be kicked out. Lower margins can be good for customers, since we can buy good products at cheaper prices, and that makes Apple's products more competitive. How single-cell-creature-like the thinking of those greedy stockholders is!
      • So, you may still need motivation.
    • If you can speak English without worrying about your pronunciation and know how to start a conversation with strangers, it will give you a good feeling, and you may even find an opportunity in a country other than the one you live in. Whether you emigrate or not doesn't matter; at the least you get fresh, live information on the markets you're interested in.
    • You can go through as many sessions as you can. If you do it at home with the slides and videos, you probably won't. I personally didn't watch the videos except for a few selected ones.
    • If you take a MacBook with you, you can attend the hands-on sessions and learn with Apple engineers' help at close range. To make the learning more effective, come already familiar with the code you want to write and the technology behind it; then you can ask questions directly instead of spending the hands-on session writing code. While asking questions you can also chat with Apple people and suggest your ideas. You never know whom you are talking to. Actually, I noticed many people asked the Apple engineers what they do at Apple; that way you can directly approach the person responsible for the technology you're interested in and ask very specific questions. It is very different from asking on their mailing lists and discussion boards, with the resident Apple-lover police who live there.
    • Nice meals and a kind of meet-up: well, some people hate the meals Apple provides. True, the chefs prepare them in large quantities, so the food may be dry and taste like plastic. Still, eating with birds of a feather gives you a good feeling.
    • Sample projects that show how to use new frameworks, or even older ones. Remember: sometimes these sample projects are not available for download by people who didn't attend WWDC, and sometimes they are. For WWDC 2012 Apple provided a download link for the sample projects used at the conference; for WWDC 2011 they didn't. I don't know whether they decided to post them for good starting with WWDC 2012, but leave open the possibility that they won't post them for WWDC 2013.
  • Why it’s OK not to attend
    • The fee, $1,600, plus the hotel and probably a rental car, costs a lot. A LOT… It kills me.
    • The speakers may go too fast on the things you want to concentrate on, and too slowly on the things you want to skip. They flip through the slides the same way.
    • The slides and videos are available to any member of the Apple Developer Program, so you can sit in front of your computer and watch and read without paying the killer price. (Plus, you can go to the bathroom by clicking the "pause" button.)
    • You don't need to persuade a manager who doesn't understand why his software engineers need to stay current.
    • Actually, you may not learn anything just because you attend. Even people who are good at English say so. In any case, you still have to figure out how to use the framework methods and classes you're interested in, in the way you want. If you are not a native English speaker, then apart from gathering with people who speak your mother tongue, WWDC can be boring. And our favorite pierrot is no longer there. (You know who he is.)

If the attendance fee were around $150-$300, I could easily make up my mind to attend. But at $1,600+, it's a hard decision.

This time I'd like to attend. I feel out of gas developing software these days. Even on Windows, I have to confront people who don't understand software development, and that environment lets me down. Developing software has been a joy so far; here where I work, though, it's discouraging.
Attending an Apple-centric conference doesn't cheer up only my Apple side. It cheers up the whole developer in me, whether Windows, web, or Unix.
So I value that "live" feeling that we are software developers a lot.

How about you? Are you going to attend WWDC 2013?

What happened to Apple?

After a lot of struggling at home yesterday with the sudden expiration of iOS 6.1 beta 4, iTunes now displays the "Check for Update" and "Restore iPhone…" buttons.


iTunes now displays “Check for Update” and “Restore iPhone…” buttons

iTunes didn't display them yesterday, i.e. Jan. 27th, 2013, Pacific time.
Why didn't it display those buttons yesterday?

Let’s point out a few things.
When a beta image expired in previous versions (I'm not talking only about iOS 6.x or 5.x; I have experience with iOS betas since the very beginning), Apple allowed a sufficient "cushion" of days. So even if the following beta image was not installed, the old version stayed alive. If I remember right, in the 3.x days I installed beta 2, skipped beta 3, and updated to the public version once it was announced. There was no problem using the iPhone/iPod touch on beta 2 in the meantime.
Yesterday, however, my iPhone suddenly displayed an "Activation Needed" message. I was out of the house most of the day, so I didn't know whether a new version had been released on Jan. 26th, which was a Saturday.
Then, when I came back home, my iTunes didn't display the "Check for Update" and "Restore iPhone…" buttons. So although I downloaded the latest beta image, I couldn't update my phone. My iPhone was effectively bricked.

Who is leading Apple's development and SQA teams these days? After the iPhone got popular, I started to feel the quality of their work degrade gradually. I understood that they were short of people; I've heard that only a finger-countable number of people worked on Xcode, while at MS about 250 people work on Visual Studio. I also know that many good people on the Mac OS X team wanted to move over to the iPhone team, so I expected lower-quality work for a while. Still, I believed in Apple's people: they would put out the urgent fires and get back to normal. It turned out they didn't.
I file bugs on Apple's bug report pages; some are easily noticeable problems.
Even for betas, their internal SQA team should test things thoroughly before publishing to the developer community. Outside developers are not their SQA team; although we test their software, our main focus is different. I'm not saying their software should be perfect. They are human too, and they can make mistakes. But I feel their quality has degraded seriously compared to before. Well, to put it nicely, it's social SQA.

Personally, I do three passes of testing on my own code.
While implementing, I frequently debug what I've written to make sure it works as designed. When I finish the implementation, I debug it to see that it works as a whole, then do another pass to check that it doesn't break any related features. Then I hand it over to the SQA team.
So there have been no bugs once things left my hands. I'm not saying I'm always perfect, but I at least try to verify what I do. The only things I don't test thoroughly are features whose designed behavior hasn't been settled by the people who requested them; those are rough implementations meant as a basis for whatever they end up asking for.
I don't want to use some grand-sounding term like "Test-Driven Development".
Even without the term, this is common sense.

(Strangely, since 2000 people in this field keep inventing nice-sounding terminology for the same old things. It looks to me like an attempt to impress people with business backgrounds, or programmers without a CS background, into thinking they know a lot or are professionals. But you know what? While that can help with office politics or impress in a hiring process, the people who actually make things work are the ones who have that knowledge melted into their habits, and they can't even spare the time to learn the terminology.)

Look at what Apple releases. Xcode, Mac OS X, iOS… they contain lots of very easily visible bugs.
Where are Apple's famous integrity and perfectionism?
It's not Steve Jobs they lost. It's that integrity and perfectionism.

To Apple

To Apple: maybe Mac OS X is not important to you any more. But don't forget that there are lots of people who depend on it for serious work and use their Macs in daily life.

I found lots of serious issues with iCloud sync in Mail: peculiar behavior when syncing mailboxes, messages not being deleted, deleted mail displayed on other Macs even though the view setting is set to hide it, and so on.

Mac OS X is not a toy. Don't treat users as guinea pigs. I'm pretty sure you Apple people don't test your software products as well as you used to. Very apparent bugs in the OS and in Xcode go unfixed and get shipped anyway.

If you keep doing this, I'm pretty sure you will lose the market again. Will iOS be OK? No way. As you may know, I have already reported lots of problems in iOS 6 and its apps to you.

The Android dev environment is still bad compared to Xcode, but Xcode has its own dirtiness, and with Jelly Bean, Android has caught up with you a lot. Also, ARM chips contain Jazelle, a Java accelerator, so the overall sluggish performance may improve drastically. Beware of that!

GNUstep on Linux is starting to look more attractive to me than ever. You should be glad that GNUstep doesn't have a big impact on the developer and user communities. But just as Linux rose up suddenly, desktop Linux could do the same someday, even though it has failed for many years.

I don't think you have much time left, Apple.

An interesting class newly added in Mac OS X 10.7 Lion

As we know, lots of new mechanisms and classes have been added to the current Mac OS X and iOS frameworks to take advantage of GCD and other modern mechanisms that maximize the use of CPU cores.

While following a tutorial on iCloud for iOS, I noticed a new class, or interface in Obj-C terminology, called NSFileCoordinator.

Although we can distribute chores to multiple cores and CPUs using GCD, I wondered: what about handling files? And yes, NSFileCoordinator looks to be the interface for that.

Here is the documentation's explanation of the class:

The NSFileCoordinator class coordinates the reading and writing of files and directories among multiple processes and objects in the same process. You use instances of this class as-is to read from, write to, modify the attributes of, change the location of, or delete a file or directory. Before your code to perform those actions executes, though, the file coordinator lets registered file presenter objects perform any tasks that they might require to ensure their own integrity. For example, if you want to change the location of a file, other objects interested in that file need to know where you intend to move it so that they can update their references.

Availability: Mac OS X 10.7 Lion and later, iOS 5.0 and later

How complete can a framework be: MS vs. Apple

As a CS major, I am interested in fundamental technologies and techniques like compilers, OSes, 3D graphics, computer vision, and parallel and distributed systems. That is one of the reasons I started to work with the three major OSes, i.e. Windows, Unix, and Mac, and have taken a lot of interest in their architectural designs. So I like working with different frameworks and comparing the designs, and the philosophies, behind them.

I think of OS architecture as the whole picture, with frameworks as parts of the OS. The architectural design of an OS is reflected in its frameworks: the overall design determines how things work and how components relate to one another, and the frameworks must be designed accordingly. This is one reason I think Apple should have filed suits against MS and Google over OS architecture rather than visual UI design.

Here I would like to show the essential difference between .NET and Cocoa. If you have followed the Toolbox API and Win32/MFC, you will see similarities between Apple's and MS's. Likewise, there are similarities between Cocoa and .NET. I think this is because Objective-C/Cocoa is in the Smalltalk camp, which tested OOP concepts in academia and was geared toward software engineering, while C++ is in the Simula camp, which is more for the real industrial field. Because C++ targets that field, it had to compromise with many constraints like CPU speed. C++ has also evolved to be everything for all people, so although it was introduced as THE OOP language, it contains many different concepts, like metaprogramming and ad-hoc polymorphism; as a result, some of its syntax became somewhat bad, and some keywords came to carry multiple semantics (e.g. extern vs. static). C# is not too different from C++ in that respect, although MS streamlined it. (Actually, C# was influenced more by Java at first, but as its name suggests it wears the face of C++. The '#' in its name is somewhat amusing: when you say it, remember it is pronounced "sharp", not "pound", and in musical notation a sharp is a half step higher than the note it is attached to. So C# is "C++" in that sense.) I can feel that all-things-for-all-people philosophy in C#.
However, Objective-C and Cocoa evolved with a focus on productivity. When you use them, you naturally come to focus on core logic rather than on "how to achieve that". Objective-C is simpler, and Cocoa is designed to be very intuitive and powerful.

OK. Let's see how Objective-C/Cocoa achieves "GUI code on the main thread".

- (IBAction)doInThread:(id)sender
{
    [NSThread detachNewThreadSelector:@selector(doSomethingMethod:)
                             toTarget:self
                           withObject:nil];
}

// Thread method
- (void)doSomethingMethod:(id)object
{
    // Calls appendLogMessage: on the main thread
    [self performSelector:@selector(appendLogMessage:)
                 onThread:[NSThread mainThread]
               withObject:@"Do something..."
            waitUntilDone:NO];
}

- (void)appendLogMessage:(NSString *)messageString
{
    NSTextStorage *textStorage = [m_logBoxView textStorage];
    NSUInteger theLastPos = [textStorage length];

    // Move the insertion point to the end, then append the message
    [m_logBoxView setSelectedRange:NSMakeRange( theLastPos, 0 )];
    [m_logBoxView insertText:[NSString stringWithFormat:@"%@\n", messageString]];
}

It's very intuitive. (Don't confuse unfamiliarity with Obj-C syntax for complexity; once you get used to it, it is very easy and straightforward.) appendLogMessage: is written just like a normal message/method.

Now, let’s check how it looks in C#/.NET.

#region MSDN way
private void m_doSomethingButton_Click(object sender, EventArgs e)
{
    m_choreThread = new Thread(new ThreadStart(this.StartThreadLoop));
    m_choreThread.Name = "MSDN Thread";
    m_choreThread.Start();
}

// Thread method
private void StartThreadLoop()
{
    // This is a thread loop
    int i = 0;

    while (i < 10)
    {
        // 1. Call WriteLogMessage()
        this.WriteLogMessage(String.Format("Do chores hard!!! {0} from {1}",
                                           i, m_choreThread.Name));
        i++;
    }
}

// 2. Visited in the non-main thread's context
// 5. Visited again in the main thread's context
private void WriteLogMessage(string msgString)
{
    // 3. If we are still in the context of a non-main thread
    if (m_logTextBox.InvokeRequired)
    {
        WriteLogMessageCallback writeLogMessageDelegate =
            new WriteLogMessageCallback(WriteLogMessage);

        // 4. Invoke it in the main thread's context
        this.Invoke(writeLogMessageDelegate, new object[] { msgString });
    }
    else
    {
        // 6. Now running on the main thread
        m_logTextBox.AppendText(msgString + Environment.NewLine);
    }
}
#endregion
I added comments numbering how things are visited in the C#/.NET case, to help your understanding. To you, the framework's user, the .NET approach keeps you from concentrating on your main logic; in other words, you have to know how things work inside C#/.NET.

One of the virtues of OOP is data hiding and encapsulation. Although it says "data hiding", it is not only about data but also about "how it works internally". C#/.NET fails seriously in this area, as you can see: you must be aware of which thread context WriteLogMessage() is called in, and the method must be implemented with that knowledge.
Compared to that, messages in Objective-C/Cocoa can be written just like usual messages.

Then, are they really so different? Couldn't MS have made their .NET framework as elegant as Cocoa? I don't think the gap is fundamental. It looks to me like Cocoa's underlying code could be similar to .NET's. The difference is how much Apple refined their framework with its actual users in mind, designing it so that people can focus on their own logic, not on how Apple implemented this and that.

Then, let's try to make the C# code look similar to the Objective-C/Cocoa version.

#region Cocoa Way
private void m_doSomethinginCocoaWayButton_Click(object sender, EventArgs e)
{
    WriteLogMessage2(Environment.NewLine + "----------------------------" + Environment.NewLine);
    m_choreThread = new Thread(new ThreadStart(this.StartThreadLoop2));
    m_choreThread.Name = "Cocoa Thread";
    m_choreThread.Start();
}

private void StartThreadLoop2()
{
    // This is a thread loop
    int i = 0;

    WriteLogMessageCallback writeLogMessageDelegate = new WriteLogMessageCallback(WriteLogMessage2);

    while (i < 10)
    {
        performOnMainThread(writeLogMessageDelegate,
                            new object[] { String.Format("Do chores hard!!! {0} from {1}", i, m_choreThread.Name) });
        i++;
    }
}

private void performOnMainThread(WriteLogMessageCallback methodToInvoke, object[] parameter)
{
    if (m_logTextBox.InvokeRequired)
        this.Invoke(methodToInvoke, parameter);
    else
        methodToInvoke((string)parameter[0]);
}

private void WriteLogMessage2(string msgString)
{
    m_logTextBox.AppendText(msgString + Environment.NewLine);
}
#endregion

Don't bother with the method name performOnMainThread; whatever the name is, it doesn't matter. What we need to focus on is the pattern by which WriteLogMessage2() is called.
Now additionally assume that the Object class in the .NET framework, from which all classes inherit, contained performOnMainThread(). Then, as in Cocoa, you could ask any object to invoke a given method on the main thread, and as an application developer you could write the method to be invoked without worrying about how the invoke machinery works.

A similar thing happens with timer invocation. If you want to invoke a message at certain intervals in Cocoa, the code is very straightforward and clean. In the Win32/MFC case, however, the OnTimer() handler has to check which timer triggered the event. Here it looks to me like Apple utilizes signals and events. (A signal is a so-called H/W event, while an event is a S/W event, if you want to be specific about the terms.) I will not show sample code for that here.

The point here is that the level of refinement is higher on Apple's side. The Cocoa framework is designed more with software engineering in mind, while MFC/.NET are more about "let's put this on the market as quickly as possible." The good news is that .NET is more similar to Cocoa than MS's previous frameworks were, because NeXT showed the future a while ago and MS appears to have adopted it.

I always wonder why MS didn't simply adopt Objective-C/Cocoa for their platform development. Well, C#/.NET was a kind of gun aimed at Java, and all MS languages can share the same .NET framework; there is a slight difference there, but even Objective-C/Cocoa is ultimately a language binding. True, MS would have had to make their own Cocoa equivalent… yeah, that could have been the major issue for them.
(Well, I know that not choosing Obj-C/Cocoa or OpenStep was also due to the market situation, marketing, etc., but here I would like to focus on the technical side.)

I wonder how the F-Script people and RubyCocoa people bridged their languages to Cocoa. (Scripting-bridge tech…)

What does it look like if HiDPI mode is enabled when your monitor doesn't have high DPI?

Before starting, I would like to point out that DPI here actually means PPI. People not used to "DPI" being used loosely, for conventional reasons, might say: "Well, I read in your previous blog post that DPI doesn't affect image quality, so why does this HiDPI mode matter?"

So yes, even technical people at Apple and MS use "DPI" to mean "PPI".
Please know the history and the audience. :)

Apple calls this HiDPI mode "Resolution Independence" and uses "HiDPI" as the technical term behind it. MS uses the term "high DPI awareness".

I have two monitors: a 24″ at 1920×1080 and a 22″ at 1920×1080. According to this online DPI/PPI calculator, the former has 91.79 PPI and the latter 100.13 PPI. Both are considered normal monitors, whereas high-DPI monitors have 200+ PPI.
Until lower-priced high-DPI monitors arrive, we have to handle both normal and high DPI, and you can see what they look like on your current non-high-DPI monitors.

24″ 1920×1080 (Left) vs. 22″ 1920×1080(Right)
The border where the two screens meet is marked with red dots in the middle

(The image is at approximately 144 PPI; GIMP says it is 143.99 DPI. So although the screenshot is 1920×1080, it will be drawn smaller than your typical monitor, which is around 100 PPI like mine. This is actually a good counterexample to the claim from my previous post, where someone said "DPI doesn't matter for image quality": he blindly lumped PPI into the topic of DPI. My point is that we should be able to tell, when people say "DPI", whether they actually mean PPI or DPI. The other blog I linked in my previous post is strictly about DPI, and its author knows the difference between PPI and DPI. But casually speaking, people with a background in computers rather than desktop publishing don't differentiate DPI and PPI; by "DPI" they usually mean "PPI". Even in GIMP, although it is written as 143.99 DPI, it is actually PPI. You can tell because the image is drawn smaller than your monitor (which has the same pixel dimensions as the image) due to the higher PPI.)

One very important thing to mention here is that both of those monitors are normal-DPI monitors. (I mean PPI. :) Confused? I use "DPI" because it is the term Apple and MS people use in their documentation.)
You may ask, "Why does turning on HiDPI mode make everything drawn on the screen bigger?" (I intentionally stretched the browser window across the two monitors to show this.)
Enabling HiDPI mode with Quartz Debug makes OS X pick 2x images (or, probably, scale up) on a normal monitor. In real HiDPI mode, which you use on a new MacBook Pro with Retina Display, the OS also picks 2x images (or scales up), but it draws the 2x image at the same physical size. (By dimension I mean how many pixels along the x and y axes; by size I mean how many inches or centimeters.) Because that monitor's DPI is about twice as high, the 2x image ends up the same size as a 1x image on a normal-DPI monitor.
Because my monitor is actually a normal-DPI monitor, enabling HiDPI makes things look doubled along each axis. (Area-wise it is 4 times, because 2 times along x by 2 times along y is 4 times the area.)

How to enable HiDPI mode using Quartz Debug

At first, this can be somewhat confusing. If it is meant to simulate a high-DPI monitor on a normal-DPI monitor, shouldn't it shrink things? Then drawing a 1x image in that mode would look smaller, and drawing a 2x image would look normal.

Well… yes. If I made this "virtual HiDPI" mode, I would implement it like that.
So "Enable HiDPI display modes" is not about simulating HiDPI mode on a normal-DPI monitor.
It makes the OS pick 2x images (or scale up 1x images, if it does) on a normal-DPI monitor.

Confused? The question is which one is held fixed: the pixel density, or the size?

Actually, when Apple introduced this Resolution Independence at WWDC a while ago, in Snow Leopard (or was it Leopard?), their explanation was also somewhat misleading, even upside down.
It can easily be presented as: "If you stretch an icon to make it bigger, look, no jaggies! It scales up nicely, like a vector image! If 2x, 3x, or 4x raster images exist, the OS picks them up; otherwise it scales up the existing images!" If you think of it with DPI held fixed, then yes, it is about drawing 2x, 3x, 4x icons or images by scaling up nicely or picking up such existing images. But although that is the foundation of the technology, the point is not to draw bigger images more nicely. It is to draw bigger images on higher-DPI monitors so that they look the same size as on normal-DPI monitors. In other words, the scaling-up of images is compensated for by denser dots.

This is one reason I don't find high DPI awareness or Resolution Independence all that cool. (I'm not saying it's bad; actually I think it is a very good feature.) In the era of the Apple II and of IBM-PC compatibles with CGA, EGA, VGA, and Super VGA, we thought higher resolution was simply better. There were bigger monitors, but also 14″ or 15″ monitors with higher resolution, and even multisync monitors supporting multiple resolutions on a single monitor. In the VGA generation, we started to have displays that could show photos; the pixels were still big and made photos look a little mosaic-like, but compared to previous generations of video cards and computers, they were "photos"! MSX computers show the effect well, because they had a 320×-something mode supporting a lot of colors, and a higher-resolution mode with somewhat fewer colors. We watched "pictures" become "photos". Since Super VGA, images on monitors have been real photos. If you used an Amiga, you were lucky: you had this future earlier than IBM-PC compatible users did.
However, displaying text in graphics mode was still not great. After that period, the technology for building better monitors evolved a lot; you no longer saw rounded distortion at the edges of CRT monitors, and you had Sony Trinitron monitors and competing technologies. But to display fonts in great detail, the resolution and "DPI" were still not high enough. So every year monitor makers introduced monitors with higher resolution, which made characters and images drawn on screen smaller. To young people with good vision, it was still OK.
Now, more and more people like me have started magnifying what is drawn on screen by setting a lower resolution. For example, a 12-point font on an old computer was displayed big enough. Compare that with the monitor I have: I set 13 or 14 points to get a size similar to 12 points at the old resolution.
But 12 points is a kind of standard in the printing business. We don't want to watch text and images get smaller and smaller. Although higher DPI and resolution on a monitor of the same dimensions gives better image quality, and thus helps graphic designers and photographers produce better results, it counteracts normal computing.
Also, virtually all current monitors are built with LCD panels, not CRTs, and an LCD has its own native resolution. So although it is possible to change the resolution, setting a lower resolution on an LCD doesn't give a good result; it looks mosaic-like. It is better, then, to keep the "dimension" fixed while increasing the resolution. The benefit is images and text with greater detail. Merely scaling up existing images won't give you that, and edges can still look a little blurry; but programmers will prepare 2x images for raster graphics, or write their drawing code with vector calls like NSBezierPath, and buttons or any GUI widgets drawn that way will look great.
They can even introduce more detail in the 2x images. For example, a 1x image of a car might omit the handles on the doors; with the 2x image they can draw the handles, and because higher-DPI monitors can present such small details, you will be able to see them. Antialiasing can look great too: because there are more pixels, it comes out smoother.

However, let me tell you why I am not really delighted by this.
Monitor manufacturers make higher-DPI monitors… to sell. Anyway, you will have 2K and 4K monitors and TVs at home soon. But your house is not big enough to hang monitors the size of the silver screens installed in theaters; they will largely stay in the 20″, 30″, and 50″ range. That means PPI increases.
Current normal-DPI monitors already provide very good resolution; fonts and images drawn on them look great. I'm not saying image quality on higher-DPI monitors isn't better; it is. But current normal-DPI monitors are not in the state of the old CGA and EGA monitors. For most people, there is no practical reason to switch to a "Retina Display". (Eventually they will be forced to, because manufacturers will stop building normal-DPI LCD panels.)
What I don’t like is this part.

Look at the current MacBook Pro with Retina Display, and look at its price. It will get cheaper in coming generations, but it is more expensive. If you write iPad apps with Retina support, buying an MBP with Retina Display can make sense, because you can test those 2x images in the simulator more easily without filling up your screen. If Apple provided a "minifier" for people with normal MBPs, or if the Retina iPad simulator could be shown at half size at will, we could keep using normal MBPs; but Apple will not do that, because they want to sell newer models.

An ambiguous framework from Apple, after practically a year of absence…

While buried in a C# project for Windows tablets, I booted into Mac OS X at work. So many things have changed. Apple people have classified AppKit into many "layers".

Among them, the Core Media framework is the most confusing one. The name suggests a framework for some kind of media. But what kind of media could it be? Image, text, audio, video? There are already frameworks for those. So where does Core "Media" fit in?
It turns out that the data types used in those media frameworks, as well as in AV Foundation, are defined in Core Media.

Then wouldn't it be better to rename it something like Core Foundation for Media?
Core Foundation usually contains the base data structures for Foundation and AppKit, although it has more than that. Or how about Core Types for Media, or CoreMediaType? Wouldn't those be less confusing and abide by Apple's conventions better?

I'm already sick of MS's nomenclature, which doesn't reveal what things are. Why does Apple have to add one more? Arrrr…
