Posts Tagged ‘Fundamental Tech’

The advance of WebObjects has stalled…

EO Modeling: WebObjects brought a very easy-to-use mapping mechanism between the DBMS/data storage, the GUI, and the controller between them.
However, after NeXT merged with Apple, they didn’t push WebObjects much. As a product, WebObjects effectively died. I’m really sorry about that, because the “things as objects” movement has stalled ever since. Well, it’s true that nowadays everything is an object, but it looks to me like people don’t push the concept any further.

CORBA looks to be dead, although GNOME, built on top of CORBA, is still popular on Linux. But GNOME doesn’t promote the virtues of CORBA, in my opinion.

After a few years of silence, their paradigm actually came back to the Mac.
It’s Core Data. But it is a framework rather than a whole set of development tools and a workflow. Moreover, I don’t prefer using Core Data, for many reasons.
Unlike people who only know the Mac, people who need to handle things on multiple platforms, like me, care more about portability and control.

Anyway, here is a video on WebObjects, demonstrated by Steve Jobs himself.

But actually, that video is more about OpenStep than about WebObjects itself.
Oh.. actually there is…


An Apple-like technology that detects how busy the user is?

As soon as I got home: a frenzy of hunting for documents, scanning them, and sending emails.
I think I fired off about 4–5 emails, with two phone calls in between.
Plus a flurry of Facebook messages,
and even iChat messages.

Phew, now I can finally catch my breath.
If I were on the Apple iChat team, I would want to add a feature like this:
There would be an API that detects what the user is currently busy with, and when others try to send a message, it notifies them with an indicator, much like the one that shows someone is typing. That way, they can tell I’m busy with something even if I don’t reply to every message individually.

How would it detect that the user is busy?

  1. The OS detects that the user is typing something.
  2. Since most Macs nowadays come with a built-in camera, use the iSight camera for motion sensing, and detect when the motion looks hectic.
  3. If the user is also using an iPhone, put a similar capability into the iPhone, perhaps even sensing heat, and let the devices share the signals with each other.

All of these results are gathered together to produce the indication.
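The aggregation step could be sketched like this, assuming hypothetical per-device signals normalized to [0, 1]. The signal names, weights, and threshold are all illustrative; none of this is a real Apple API.

```python
# Illustrative sketch: combine per-sensor "busyness" signals into one
# indicator. Signal names, weights, and the threshold are assumptions.

BUSY_THRESHOLD = 0.5

def busy_score(signals, weights=None):
    """Weighted average of per-sensor signals, each in [0, 1]."""
    weights = weights or {name: 1.0 for name in signals}
    total = sum(weights[name] for name in signals)
    return sum(signals[name] * weights[name] for name in signals) / total

def busy_indicator(signals):
    """True when the aggregated score crosses the threshold."""
    return busy_score(signals) >= BUSY_THRESHOLD

# Example: heavy typing, some camera motion, an idle iPhone.
signals = {"typing": 0.9, "camera_motion": 0.6, "iphone_activity": 0.1}
```

The sensors each contribute independently, so the indicator degrades gracefully when a device (say, the iPhone) is absent; its key is simply left out of the dictionary.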

Isn’t the technology itself very Apple-like?
Unlike Samsung, which just makes the screen bigger and adds more memory.

Koreans probably wouldn’t consider this kind of thing “technology”, though…

Open source implementation of H.265 (HEVC): x265

ffmpeg patch

The point is how well you understand your choice of languages, frameworks, and platform

I agree with this post on maccrazy’s blog. (It’s in Korean and I haven’t translated it for you; I’m sorry about that.)

I’ve worked on an iOS project whose original software engineers didn’t understand Objective-C or Cocoa. They didn’t understand memory management either. They looked like freshman CS students, or even worse.
ARC is much better than garbage collection, and it replaces the old retain/release/autorelease-based memory management; the compiler is smart enough to insert those calls automatically.

However, even though a novice who passes interviews by memorizing answers rather than by building things himself can claim that he doesn’t need to understand memory management, every software engineer should understand it, even when writing C++. Oh, my! I had never thought there could be people who write code that badly until I got my first job here in the USA.

I’m sorry, but there are too many people who have only studied computer languages and the framework they happen to be interested in, and who make things work only on the surface. Then people who hire software engineers without understanding programming or software engineering tend to judge candidates by counting how many buzzwords and terms they know.

One of the funniest articles I’ve read said something like: English majors have a better chance of becoming managers, so it’s not necessary to study CS… or something like that.

Well, the most fundamental thing, however, is that they need to think outside the box.
Software engineers with broad capability tend not to remember certain terminology once they passed the level where they had to memorize it, say ten years ago. The knowledge has melted into them, so they understand it better than others do. People should not overlook such cases; those are really excellent software engineers.

How can a manager hire such people? The manager needs a profound understanding, too.

IBM Blue Gene/Q

IBM Blue Gene/Q supercomputer
5D Torus interconnection

Even one of the highest-performance supercomputers is built with an architecture similar to that of PCs.
The difference lies in how the nodes (each of which can be a computer by itself) are interconnected for internal connectivity, as opposed to an “Internet” connection.
Of course, the architecture can incorporate a faster internal bus, a better topology, faster I/O, high availability, fail-over, fault resiliency, etc.

However, that kind of supercomputer is very expensive. When processing units were not as fast as today’s, designers had to build the fastest architecture possible. But as you can see from the Blue Gene/Q diagram, nowadays you can practically build a cost-effective cluster supercomputer out of a group of computers.
Of course, the maximum performance can be lower than that of a supercomputer designed from scratch for maximum performance, but because processing units are fast nowadays, the absolute fastest design may not always be required. If a job needs a long processing time, distributing the data across some or all of a group of computers, letting them process it at the same time, and collecting the results over TCP/IP can be good enough.
(This works when the required processing time is much longer than the networking overhead.) Analyzing big data, processing tons of video, and rendering photo-realistic 3D animation are examples.
So, a cluster solution like Beowulf is a cost-effective choice.
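The distribute–process–collect pattern can be sketched as below. This is a minimal illustration, not a real cluster framework: a local thread pool stands in for the nodes, and on a real Beowulf-style cluster the same scatter/process/gather steps would run over TCP/IP (e.g. via sockets or MPI).

```python
# Scatter-gather sketch: split the data, process chunks in parallel,
# then combine the partial results. A thread pool simulates the nodes.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for an expensive per-node computation."""
    return sum(x * x for x in chunk)

def scatter_gather(data, n_nodes=4):
    # Scatter: one interleaved chunk per "node".
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    # Process the chunks in parallel, then gather the partial results.
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        partials = list(pool.map(process_chunk, chunks))
    # Combine: here the reduction is a simple sum.
    return sum(partials)
```

As the post notes, this only pays off when the per-chunk processing time dwarfs the cost of shipping the data and results across the network.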

So, there can be alternative ways to achieve good-enough performance at a lower price in certain specific situations.
Even the software design/architecture affects this, and it can reconcile the requirement of low cost with fast turn-around for submitted tasks.
For example, instead of working directly on the original uncompressed video data, the system can present a smaller proxy version of the video, which users can load and edit quickly. Then every editing step (which effect to apply, where to cut from and to, etc.) can be replayed against the original full-size video data.
This at least reduces the time to load, edit, and play back for confirmation. If we consider that actual processing time is governed not only by computing power but also by human interaction and operator-driven steps, working on a smaller version of the data that stands for its big original counterpart can actually overcome low inter-network performance (depending on the situation).
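That proxy workflow can be sketched roughly as follows; the `Clip` class, the `cut` operation, and the clip names are purely illustrative. Edits are recorded against a small proxy clip, then replayed against the full-resolution original.

```python
# Proxy-editing sketch: record edits on a cheap proxy, replay them on
# the expensive original. Classes and operations here are hypothetical.

class Clip:
    def __init__(self, name, n_frames):
        self.name = name
        self.frames = list(range(n_frames))

def cut(clip, start, end):
    """Keep only frames in [start, end)."""
    clip.frames = clip.frames[start:end]

proxy = Clip("proxy_360p", 100)    # small: loads and plays quickly
original = Clip("master_4k", 100)  # large: expensive to touch

# An edit decision list: operations recorded while editing the proxy.
edit_list = [(cut, 10, 50)]

# Apply interactively to the proxy first, then replay on the original.
for op, *args in edit_list:
    op(proxy, *args)
for op, *args in edit_list:
    op(original, *args)
```

The operator only ever waits on the proxy; the replay against the original can run unattended, e.g. overnight or on a cluster.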

So, when designing a high-performance system or solution, one should consider its software architecture as well as its hardware architecture. If either is omitted, either the desired system can’t be designed, or you end up paying a lot for a dream machine.

IBM Power 775 supercomputer

This IBM Power 775 is not a very big machine, but they still call it a supercomputer.

What is interesting is that it has :

  • PC architecture
  • PCIe version 2 x16 ( version 2 of PCIe with 16 lanes )
  • 256 cores

So, it doesn’t even have PCIe version 3.

Compared to that, Apple’s new Mac Pro looks like a (presumably) cost-effective high-performance computer.
Surely the Mac Pro can’t beat this kind of machine, but its PCIe doesn’t look too bad for practical usage.
Although Thunderbolt 2 supports only up to 20 Gbps (as a personal workstation that’s fast enough, and actually impressive), even for some really niche market where I/O bandwidth truly matters, you will find that the PCIe in the Mac Pro is not bad at all for its probable price.
(I don’t think the Mac Pro will be as expensive as an IBM Power 775, or even exceed the price range of PC servers.)

Some articles on Thunderbolt 2

Intel : Video Creation Bolts Ahead – Intel’s Thunderbolt™ 2 Doubles Bandwidth, Enabling 4K Video Transfer & Display

AnandTech : Intel’s Thunderbolt 2: Everything You Need to Know

nofilmschool : Intel’s Thunderbolt 2 Now Official, Doubling Bandwidth & Enabling 4K Video Display

Intel Teases 4K Video Transfer, Display with Thunderbolt 2

It’s quite comparable with Fibre Channel in terms of technology. Both were designed around optical cabling. (Currently available commercial Thunderbolt is based on copper cables, and Thunderbolt 2 is the same; it originated as Intel’s Light Peak, and the reason it shipped over copper was that they couldn’t find an easy, good optical interconnect design for the consumer market.)

Currently available practical Fibre Channel solutions look to be in the range of 8 Gb to 16 Gb.

Then, for Apple, even if they allow expanding the new Mac Pro only through Thunderbolt 2, that’s good enough to match Fibre Channel speeds, because Thunderbolt 2 goes up to 20 Gbps. Real-world performance can vary, though, so time will tell.

Emulex’s parts are 16GFC, whose line rate is about 14 Gbps (roughly 1600 MB/s of usable data per direction), so per direction it is somewhat lower than Thunderbolt 2’s 20 Gbps.

Also, when we consider the actual end-to-end performance, including any manipulation of the data by the CPU or GPU, we also need to consider the host-interface speed of the Fibre Channel adapter cards.
Current PCIe 16-lane speeds are:

v1.x: 4 GB/s (40 GT/s)
v2.x: 8 GB/s (80 GT/s)
v3.0: 15.75 GB/s (128 GT/s)
v4.0: 31.51 GB/s (256 GT/s)

So, it’s way faster than Thunderbolt 2.
In Gbps, even v1.x is 32 Gbps.
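These figures follow from the per-lane transfer rate and the line encoding (8b/10b for v1.x/v2.x, 128b/130b for v3.0 onward); a quick check of the arithmetic:

```python
# Usable PCIe bandwidth = per-lane GT/s * lanes * encoding efficiency.

def pcie_x16_gbps(gt_per_s_per_lane, payload_bits, total_bits, lanes=16):
    """Usable bandwidth of a PCIe link, in Gbps."""
    return gt_per_s_per_lane * lanes * payload_bits / total_bits

v1 = pcie_x16_gbps(2.5, 8, 10)      # 8b/10b  -> 32 Gbps  ==  4    GB/s
v2 = pcie_x16_gbps(5.0, 8, 10)      # 8b/10b  -> 64 Gbps  ==  8    GB/s
v3 = pcie_x16_gbps(8.0, 128, 130)   # 128b/130b -> ~126 Gbps == ~15.75 GB/s
v4 = pcie_x16_gbps(16.0, 128, 130)  # 128b/130b -> ~252 Gbps == ~31.5  GB/s
```

Dividing by 8 recovers the GB/s figures in the list above, and the jump in efficiency from 8b/10b (80%) to 128b/130b (~98.5%) is why v3.0 nearly doubles v2.x despite the transfer rate rising only from 5 to 8 GT/s.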
So, I wonder whether Apple will decline to include a PCIe slot, or any external bridge to PCIe, for the high-performance computing market.
I’m pretty sure that any decently sized video production business will require PCIe-level performance; some build their own interfaces over PCIe, rather than using Fibre Channel, to achieve even faster speeds. Will Apple give up that market? (Hmm, Apple has Final Cut Pro, which is quite popular in that market. Oh, I’ve heard that “video” is the kind of word that should not be used in this business, because “video” reminds people of somewhat low-quality “moving images”.)

Shouldn’t someone attending WWDC ask Apple’s engineers and their bosses about this?
