Archive for the ‘HCI’ Category

User Interface in our life

Bad example of user interface in real life

Today it took 4 hours to get back home. Traffic on the 101, the 134, and the 5 was really bad. Because the drive took so long, I badly needed a restroom. One place I knew from the 134 was the Glendale Galleria, so I finally exited at Glendale.

I have visited the Glendale Galleria several times before, but I still can't locate the restrooms there easily. Although the building is huge and long, the restrooms are only in its middle section.

Once you enter the Galleria through its doors, you can't find your own position on the directories there. The "You are here" sticker or mark should be painted in a different hue so it stands out, but on this directory it was not. (See the pictures below.) In fact, the Pepsi stickers were more noticeable than the "You are here" marker.


It was also very hard to figure out which floor you were on.

Also, in a public space like this, restrooms should be easily accessible and noticeable, but these were not.
As you can see from the last picture of the mall, although there is a restroom on the left side, you can't easily see any sign for it.

It turned out the sign was written in a small font on a golden plate.

In South Korea, directories are usually drawn to reflect the orientation of the building or mall from the spot where you are standing.
If you look at a directory, the actual shops and everything else are laid out just as the directory shows, so every directory is drawn differently depending on where it will be installed. The spot where you are standing is usually at the bottom center, marked in a bright color whose hue is far more noticeable than the colors used for other objects on the directory. And "You Are Here" is written in a big font.

But here in the US, I have rarely found a shopping mall or public place with directories drawn that way.

User interface is not only for software programs, and UI is not about making software look pretty. It is a very practical, strongly goal-oriented methodology for guiding people. Everything around us should have its own UI: spigots, door handles, and so on.

It's very frustrating.


Functional GUI design

The reason I like Apple's products, especially its software, is that Apple understands the Bauhaus design philosophy: "Form follows function." In other words, design is not just about looks; it should help you use the object or product.

Here is a good example. (Nowadays even Windows does this, and Linux, of course, has done it for a long time.)

A Transparent window can overcome shortcoming of a small screen

I have a 13″ MacBook. Because its screen is small, I often have to switch between a reference document and the window I am typing in, and that switching distracts me.
However, because the Terminal window is transparent (you can set how transparent it should be), I can see what is behind it. It is very easy to check the content of a PDF without switching back and forth. A transparent window is not just eye candy; it is very functional.

I love this kind of touch. New users who have just entered the Apple world tend to think that Apple's products are merely pretty. That is wrong: they are not just for visual effect; they are functional. Apple's people have long been good at this kind of deep consideration.

However, I am disappointed by their new tool for programmers. It hurts productivity and is a big step backward from its previous version. It looks as if it were designed by someone entirely different. I wish Apple would bring the functional design back to that tool.

One big issue in machine translation

In the HCI field, there are two inherently difficult problems that I am aware of.
One is translation of human languages and the other is voice recognition. (Similarly, handwriting recognition is also a big problem, and there has been no big advance in any of these fields.)

Today I tried Google Translate to see how much it had improved.
Google recently updated the interface of its Translate web site. Now you can upload a document and it automatically translates it for you.
Here is its UI.

So, unlike the previous version, you paste the URL of a web page to translate there and then press the "Upload" button, which is not visible in the picture above.

Then the translation result looks like this.

Translation Results

Actually, the screenshot was taken after I had tried it once and changed the translated title from "펜촉 객체를 위한 메모리 관리", i.e. "Memory Management for a Pen-tip".
Then I deleted the "uploaded" document (strangely, even if you don't upload a file and just paste a web URL, it still calls it "uploaded") and pasted the URL again. Intelligently, it remembered the text I had changed before.

Anyway, what is funny is:

  • Why it translated "Nib Objects" as "pen-tip" objects in Korean
  • The choice of "콘센트", or concent, for "Outlets"

I believe the first one comes from its vocabulary: somehow "Nib" is mapped to "펜촉", or pen-tip.
The second one is more interesting, and I think it shows why machine translation is so hard.
"콘센트", or concent, meaning an electrical outlet, is so-called Konglish. I don't know how we Koreans started using that word, but right or not, we use it. Google Translate smartly chose that word and translated accordingly.
However, there are one or two big problems here.

  1. Whether to write it in English characters ("Outlets") or in Korean as pronounced (아웃렛츠 or 아웃렛)
  2. Whether to replace it with "콘센트", or concent

The first is a question of which form is more natural to Korean readers: do they write words from English or other languages in the original characters or in Korean? What complicates this is that we use English characters for some terms and Korean characters for others. For something like "Outlets", which here does not mean the actual outlet you plug power cables into but a specific term in Cocoa/Objective-C programming, we usually don't write it in Korean characters. How can a machine figure out that context and choose the better, more natural form?

The second problem is more fundamental: how should foreign words be translated at all? Should a word be chosen based on how people there use it in daily life? Then "콘센트" is right. But what about the context? Whether the source says "Outlets" or 아웃렛, the choice between 콘센트 (concent) and 아웃렛 (outlet) should be made only after figuring out the context, and in this case "Outlets" is the better and correct choice.
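As a toy illustration of this context problem, here is a sketch of choosing a Korean rendering of "outlet" from surrounding words. The cue lists and the function are entirely invented for illustration; real machine translation does not work from hand-written word lists like this.

```python
# Toy word-sense sketch (invented, not Google's method): pick a rendering
# of "outlet" depending on crude context cues found nearby.

PROGRAMMING_CUES = {"cocoa", "objective-c", "nib", "ibaction", "xcode"}
ELECTRICAL_CUES = {"plug", "power", "cable", "socket", "voltage"}

def render_outlet(context_words):
    """Choose how to render 'outlet' in Korean text from context words."""
    words = {w.lower() for w in context_words}
    if words & PROGRAMMING_CUES:
        return "Outlets"   # keep the English term in a Cocoa programming context
    if words & ELECTRICAL_CUES:
        return "콘센트"     # the everyday (Konglish) word for a power outlet
    return "아웃렛"         # otherwise, fall back to transliteration

print(render_outlet(["Cocoa", "Nib", "file"]))  # Outlets
print(render_outlet(["power", "cable"]))        # 콘센트
```

Even this crude sketch shows the difficulty: the quality of the output depends entirely on recognizing which "world" the surrounding text belongs to.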

How can a machine figure all this out? It is a really difficult problem. Every language on Earth has different properties. Even within Korean, I mean between North and South Korea, North Koreans say "얼음 보숭이" while South Koreans say "아이스크림", which is just "ice cream" written as it is pronounced.

I believe Google decided to attack this problem by letting users upload their own translations. In the previous version, you could paste text or a URL and suggest your own, better translation to the Google Translate system. The same thing is possible now, though the UI is different; the approach is the same.

Then, I believe, Google's AI collects these suggestions, finds common patterns with higher usage ratios, and the next time someone asks Google Translate to translate the same text, or text with similar phrases, it chooses the most suitable pattern from its expert-system DB (if it even uses an expert system).
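The crowd-sourcing idea described here could be sketched as a simple translation memory that returns the most frequently suggested translation for a phrase. This is a hypothetical illustration of the idea only; it says nothing about how Google's actual system works.

```python
# Hypothetical translation-memory sketch: collect users' suggested
# translations and return the most frequent one, falling back to the
# machine's own output when no suggestions exist.

from collections import Counter, defaultdict

class TranslationMemory:
    def __init__(self):
        # source phrase -> counts of each suggested translation
        self.suggestions = defaultdict(Counter)

    def suggest(self, source, translation):
        """Record one user's suggested translation for a source phrase."""
        self.suggestions[source][translation] += 1

    def translate(self, source, fallback):
        """Return the most common user suggestion, or the machine fallback."""
        counts = self.suggestions.get(source)
        if not counts:
            return fallback
        return counts.most_common(1)[0][0]

tm = TranslationMemory()
tm.suggest("Memory Management for Nib Objects", "Nib 객체를 위한 메모리 관리")
tm.suggest("Memory Management for Nib Objects", "Nib 객체를 위한 메모리 관리")
tm.suggest("Memory Management for Nib Objects", "펜촉 객체를 위한 메모리 관리")
print(tm.translate("Memory Management for Nib Objects",
                   fallback="펜촉 객체를 위한 메모리 관리"))
# prints "Nib 객체를 위한 메모리 관리"
```

The hard part a real system would face, which this sketch ignores, is matching *similar* phrases rather than identical ones, and deciding when the crowd's choice should override the machine's.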
But I haven't seen that happen yet. I have tried it a couple of times over the years, and Google's system doesn't seem to look up the translation I submitted a few months ago. Still, I feel that Google will do this eventually; without that approach, there would be no reason to let users provide their own translations. (In the current version, the translated document is saved in your account. Previous versions didn't have that functionality, yet the system still let you suggest your own translation, which means Google's system collects your translations for future reference.)

It is very interesting to see how Google’s translate system will evolve.

Connecting the Human Brain to a Computer

Before coming to the US.. before joining Samsung.. I used to say things like this.
In CS, research on the computer itself is mostly finished; there is nothing exciting left.
Only its applications remain. Of course, some brilliant person might appear somewhere, sometime, and change the paradigm entirely..
At the time I was following foreign papers and technology trends, and what I wondered about was this: the signals flowing through the nervous system are electrical too, so how does the brain understand and process them?
It seemed that if we began to understand even a little of this, there would be breakthroughs in prosthetics and many other fields. Around that time there was work at Johns Hopkins where the mouse pointer moved according to where on the computer a person was looking. This was done not by tracking pupil position with a camera, but by interpreting the signals sent to the brain when the eyes move.
When I came abroad to study, I searched hard for a place doing this kind of work, but I couldn't find one. And I couldn't exactly go to medical school..
But early this year, I think, MIT built a black box that analyzes and processes brain waves. Then similar research results started quietly coming out of USC as well. Why don't they publicize this? I wonder..

Yesterday a newsletter arrived from USC. I'll scan and upload it. (Why can't Blogger upload files like PDFs...)
