Archive for the ‘Fundamental Tech’ Category

Introduction to Machine Learning for Developers – Algorithmia

Converting Unicode code point to surrogate representation

Although I looked through the official Unicode web site, I couldn't find how to convert a Unicode code point to the 4-byte representation that emoticons use. What's odd about emoticons is:

  • The Unicode emoticon block is only a small subset of what people think of as emoticons
  • Some emoticons fit in a single code point, but others use a 4-byte sequence of two escape units ( is there a name for this? )
  • Is there a rule for converting a code point to that 4-byte sequence, and to UTF-8?
    • There is a well-defined rule between code points and UTF-8

@gluebyte sent me a URL for a web site, which I had also found at work, but I didn't know there was a conversion rule between a code point and the 4-byte Unicode escape sequence. ( This is called a surrogate pair, so it's different from a normal Unicode code point escape sequence. )

Actually, when it comes to Unicode, the terminology is harder than the concepts it describes.
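The conversion rule itself is short. For a code point above U+FFFF, subtract 0x10000, then split the remaining 20 bits into two 10-bit halves, anchored at 0xD800 (high surrogate) and 0xDC00 (low surrogate). A minimal Python sketch (the emoji example here is my own, not from the site above):

```python
def to_surrogate_pair(code_point):
    """Convert a supplementary-plane code point (U+10000..U+10FFFF)
    to its UTF-16 surrogate pair (high, low)."""
    assert 0x10000 <= code_point <= 0x10FFFF
    offset = code_point - 0x10000
    high = 0xD800 + (offset >> 10)    # top 10 bits
    low = 0xDC00 + (offset & 0x3FF)   # bottom 10 bits
    return high, low

# U+1F600 GRINNING FACE
high, low = to_surrogate_pair(0x1F600)
print(hex(high), hex(low))            # 0xd83d 0xde00

# The same code point in UTF-8 is a different 4-byte sequence:
print("\U0001F600".encode("utf-8"))   # b'\xf0\x9f\x98\x80'
```

So the "4 bytes" an emoticon uses in UTF-16 (the surrogate pair) and the 4 bytes it uses in UTF-8 are two different encodings of the same code point.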

4K video & Intel

Unofficial AirPlay Protocol

Apple spent lots of money on Taligent, OpenDoc, and many other projects. However, they lost a lot of money on them.
One of the things Steve Jobs did when he came back to Apple was to axe long-term research.
They didn't have money to spend on anything that didn't make money immediately.

They probably still take that approach to product and technology development.
Bonjour, AV Foundation for Windows, AirPlay for Windows…
Apple provides SDKs for some of them, but hasn't updated them for a long time.
If Apple played hardball in the computer industry, they could have made Bonjour for .NET, QuickTime for .NET, etc.
But they don't.
QuickTime for Windows is still left behind. On the Mac, QuickTime is deprecated. Then what about Windows?
Because they have plugins for web browsers, and many movie preview sites were built with QuickTime, I believe Apple people are working on AV Foundation for Windows to keep supporting them. However, I don't sense it as strongly as I sensed Mac OS X for Intel back in the PowerPC era.
How about AirPlay for Windows? Apple surely has its SDK in house, because iTunes for Windows already has AirPlay capability. But they don't announce it. Is it because it's not ready yet?

Well.. unlike in the PC era, when they had to attract Windows developers and users, Apple now has a very strong platform, iOS. Financially, mentally… in every aspect they focus more on iOS than on Mac OS X.
Also, by not announcing those SDKs, they can persuade people to buy into Apple's platform.
When they were weak, they had to approach Windows developers and users actively. Now, it's reversed.

Some may say that they don't need to support Windows. Well.. yes.. maybe it would be better to make AirPlay for Android, or
Bonjour for Android. However, they don't do that either, because the smartphone business is different from the PC business.
On the PC platform, developers choose 3rd-party libraries freely. But on smartphone platforms, people tend to rely on the frameworks provided by the OS vendor.

The PC and smartphone businesses are very different.

But anyway.. for people who are curious about the possibility of a 3rd-party implementation of AirPlay.. here it goes.


The advance of WebObject has been stalled…

EO Modeling.. WebObjects brought a very easy-to-use mapping mechanism between the DBMS/data storage, the GUI, and the controllers between them.
However, after NeXT was merged into Apple, they didn't push WebObjects much. Actually, as a product, WebObjects died. I'm really sorry about that, because the "things as objects" movement has stalled ever since. Well.. it's true that nowadays everything is an object. But it looks to me like people don't push more advanced concepts anymore.

CORBA looks to be dead, although GNOME, built on top of CORBA, is still popular on Linux. But GNOME doesn't promote the virtues of CORBA, in my opinion.

After a few years of silence, their paradigm actually came back to the Mac.
It's Core Data. But it's more of a framework than a whole set of development tools and workflow. Moreover, I don't like using Core Data, for many reasons.
Unlike people who only know the Mac, people who need to handle things on multiple platforms, like me, care more about portability and control.

Anyway.. here is one video on WebObjects, demonstrated by Steve Jobs himself.

But actually, that video is more about OpenStep than about WebObjects itself.
Oh.. actually there is one…

Is H/W technology the only technology?

From my Hot Potatoes on My Hands blog.

Is H/W technology the only technology?

An Apple-esque technology that senses how busy a user is?

As soon as I got home: a frantic hunt for documents, then scanning and sending emails..
I think I fired off 4~5 emails in a row, with two phone calls in between..
then a flurry of Facebook messages..
and even iChat messages..

Whew.. now I can finally catch my breath.
If I were on the Apple iChat team, I'd love to add a feature like this:
an API that detects what the user is currently busy with, and when someone tries to send a message, shows them an indicator, much like the "typing…" indicator. Then even if I don't reply to each message individually, people can tell I'm busy with something..

How would it detect that you're busy?

  1. The OS detects that you're typing something.
  2. Since most Macs these days come with a built-in camera, use the iSight camera for motion sensing; if the motion looks frantic, detect that.
  3. If you're also using an iPhone, put a similar feature there too, perhaps even sensing heat, and share the data between the devices.

Gather all of these signals together and present one indication.

Isn't the technology itself Apple-esque?
Not like Samsung, just enlarging the screen or adding more memory.

Koreans probably wouldn't consider this kind of technology to be technology, though…

Open source implementation of H.265 (HEVC): x265

ffmpeg patch

The point is that how well you understand your choice of languages, frameworks and platform

I agree with this post on maccrazy's blog. (It's in Korean and I'm not translating it for you. Sorry about that.)

I've worked on an iOS project whose original S/W engineers didn't understand Objective-C and Cocoa. They didn't understand memory management either. They looked just like freshman CS students ( or even worse ).
ARC is much better than garbage collection, and it replaces the old release/retain/autorelease-based memory management. The compiler is smart enough to insert those calls automatically under ARC.

However, even though a novice-pretending-to-be-an-excellent-iOS-programmer, who passed the interview by memorizing answers rather than building things, may say that they don't need to understand memory management, every S/W engineer should understand it. Even when they write in C++…. oh, my! Until I got my first job here in the USA, I had never imagined there could be people who write code that badly.

I'm sorry, but there are too many people who have merely studied computer languages and their framework of interest, and make things work only on the surface. Then people who hire S/W engineers without understanding programming and S/W engineering tend to judge candidates by counting how many buzzwords and terms they know.

One of the funniest articles I read said something like.. English majors have a better chance of becoming managers, so it's not necessary to study CS/E.. or something like that..

Well.. however, the most fundamental thing is that hiring needs to think outside the box.
S/W engineers with broad capability tend not to remember certain terminology once they passed the level where they had to memorize it, say 10 years ago. The knowledge has melted into them, so they understand the subject better than others. People should not overlook such cases; those are really excellent S/W engineers.

How can a manager hire those people? The manager must have a profound understanding as well.

IBM Blue Gene/Q

IBM Blue Gene/Q supercomputer
5D Torus interconnection

Even one of the highest-performance supercomputers is built with an architecture similar to that of PCs.
The difference is how the nodes ( each of which can be a computer by itself ) are interconnected, for internal connectivity rather than "Internet" connection.
Surely the architecture can incorporate a faster internal bus and better topology, faster I/O, high availability, fail-over, fail resiliency, etc.

However, that kind of supercomputer is very expensive. When processing units were not as fast as today's, designers had to build the fastest possible architecture. However, as you can see from the Blue Gene/Q diagram, you can practically build a cost-effective cluster supercomputer out of a group of ordinary computers.
Surely the maximum performance can be lower than that of a supercomputer designed for maximum performance from scratch, but because processing units are fast nowadays, the fastest possible performance may not always be required. If a task requires a long processing time, distributing the data to some or all of the computers in the group, letting them process it at the same time, and collecting the results over "TCP/IP" can be good enough.
(This works when the required processing time is much longer than the networking delay.) Analyzing big data, processing tons of video, or rendering photo-realistic 3D animations are examples.
So, a cluster solution like Beowulf is a cost-effective choice.
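The distribute/process/collect idea above can be sketched in a few lines. This is only a minimal local sketch, with worker threads standing in for nodes; in a real cluster each chunk would be shipped to a separate machine over TCP/IP and the partial results collected back:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Placeholder for the expensive per-node work (video encoding,
    # rendering, analysis...); here we just sum the numbers.
    return sum(chunk)

def scatter_gather(data, n_nodes=4):
    """Split data into n_nodes chunks, process them in parallel,
    and gather/reduce the partial results."""
    size = (len(data) + n_nodes - 1) // n_nodes
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Threads stand in for cluster nodes in this sketch.
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        partials = list(pool.map(process_chunk, chunks))
    return sum(partials)  # final reduce step

print(scatter_gather(list(range(1000))))  # 499500
```

The pattern pays off exactly when `process_chunk` takes far longer than shipping the chunk and its result over the network, as the post notes.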

So, there can be alternative ways to achieve good-enough performance at a lower price in certain specific situations.
Even the S/W design/architecture affects this, and can satisfy the requirement of low cost with high-performance turnaround for submitted tasks.
For example, instead of working directly on the original uncompressed video data, a system can present a smaller proxy version of the video, which users can load and edit quickly. Then each editing step, like which effect to apply or where to cut, can be replayed against the original full-sized video data.
This at least reduces the time to load, edit, and play back for confirmation. If we consider that actual processing time is governed not only by computing power but also by human interaction and any work involving operators, working on a smaller version of the data that represents its big original counterpart can actually overcome low inter-network performance. ( depending on the situation )
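The proxy-editing idea can be sketched like this. The operations and data here are hypothetical stand-ins of my own; the point is only that edit decisions recorded against the small proxy replay unchanged against the big original:

```python
edits = []  # list of (operation, args) recorded while editing the proxy

def record(op, **args):
    edits.append((op, args))

def apply_edits(frames):
    """Replay the recorded edit list on any version of the footage,
    proxy or original; the decisions are resolution-independent."""
    for op, args in edits:
        if op == "cut":
            # Remove frames [start, end) from the sequence.
            frames = frames[:args["start"]] + frames[args["end"]:]
        elif op == "reverse":
            frames = frames[::-1]
    return frames

proxy = list(range(10))        # stand-in for low-res frames
record("cut", start=2, end=5)  # decided interactively on the fast proxy
print(apply_edits(proxy))      # [0, 1, 5, 6, 7, 8, 9]
# Later, the same edit list is applied to the full-size original,
# possibly overnight on the cluster.
```

Only the final replay touches the heavy original data, so the interactive loop stays fast even over a slow network.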

So, when designing a high-performance system and solution, one should consider the S/W architecture of the solution as well as the H/W architecture. If either is omitted, either the desired system can't be designed, or you'll have to pay a lot to buy a dream machine.
