New mechanisms for high-performance computing on Snow Leopard

Just like the previous post, I would like to collect links on this subject and think it through.

Currently, there is one thing I’m really curious about: how are these technologies interrelated?
For example, Grand Central Dispatch distributes jobs across multiple CPU cores, while OpenCL distributes jobs to CPUs and GPUs. If a programmer wants to achieve some parallelism transparently and doesn’t care whether the job is done on the GPU or the CPU, what mechanism on Snow Leopard handles that? Currently it seems to me that you have to write either OpenCL or Grand Central Dispatch code (though some of it will be handled transparently). Then how do you decide whether your function is better performed by the GPU or the CPU?
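
To make the CPU-side picture concrete, here is a minimal Grand Central Dispatch sketch; the array name, its size, and the per-element work are made up purely for illustration. dispatch_apply hands the iterations to a concurrent queue, which spreads them over the available cores.

```objc
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    // Illustrative data; the name and size are invented for this sketch.
    enum { kCount = 1000000 };
    static float samples[kCount];

    // A global concurrent queue schedules work on as many CPU cores as
    // the machine has available.
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    // dispatch_apply runs the block once per index, in parallel.
    dispatch_apply(kCount, queue, ^(size_t i) {
        samples[i] = samples[i] * 0.5f;   // some per-element work
    });

    printf("done\n");
    return 0;
}
```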

How are NSOperation and NSThread interrelated? How is OpenMP going to be used? Does NSOperation use OpenMP internally? Or do OpenCL or Grand Central Dispatch use OpenMP internally?

Let’s think about it after reading the articles above.

2 responses to this post.

  1. Posted by Joachim

    If it’s bit-work (images, video, sound, encryption; that kind of thing), it’ll probably work really really well on the GPU, and you should use OpenCL. If it’s any kind of I/O, or if you just want to parallelize some code you already have, or you want the easiest solution while still doing stuff asynchronously, use GCD.
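
    For the bit-work case, a minimal OpenCL host sketch might look roughly like this (error handling is omitted, and the kernel name "scale", the buffer size, and the scaling factor are all made up for illustration). The point is that the same kernel source is compiled at run time for whichever device you ask for, GPU or CPU.

    ```objc
    // Minimal OpenCL host sketch; link against the OpenCL framework.
    #include <OpenCL/opencl.h>
    #include <stdio.h>

    static const char *kKernelSource =
        "__kernel void scale(__global float *data, float factor) {\n"
        "    size_t i = get_global_id(0);\n"
        "    data[i] = data[i] * factor;\n"
        "}\n";

    int main(void)
    {
        enum { kCount = 1024 };
        float data[kCount];
        for (int i = 0; i < kCount; i++) data[i] = (float)i;

        // Ask for a GPU device; CL_DEVICE_TYPE_CPU would also work here.
        cl_device_id device;
        clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueue(context, device, 0, NULL);

        // Build the kernel from source at run time for whatever device we got.
        cl_program program =
            clCreateProgramWithSource(context, 1, &kKernelSource, NULL, NULL);
        clBuildProgram(program, 1, &device, NULL, NULL, NULL);
        cl_kernel kernel = clCreateKernel(program, "scale", NULL);

        // Copy the data to the device, run one work-item per element, read it back.
        cl_mem buffer = clCreateBuffer(context,
                                       CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                       sizeof(data), data, NULL);
        float factor = 0.5f;
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buffer);
        clSetKernelArg(kernel, 1, sizeof(float), &factor);

        size_t globalSize = kCount;
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &globalSize, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(queue, buffer, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

        printf("data[2] = %f\n", data[2]);

        clReleaseMemObject(buffer);
        clReleaseKernel(kernel);
        clReleaseProgram(program);
        clReleaseCommandQueue(queue);
        clReleaseContext(context);
        return 0;
    }
    ```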

    NSThread is just a wrapper around pthreads. NSOperation is built on top of GCD. OpenMP isn’t used anywhere (not by GCD, not by NSOperation, not by OpenCL), as GCD is more Mac-like and faster. See http://macresearch.org/cocoa-scientists-xxxi-all-aboard-grand-central for a comparison.
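
    As a rough illustration of that layering, here is a minimal NSOperationQueue sketch for Snow Leopard (the operation count and the log message are just placeholders): the queue decides how many operations to run concurrently, and you never touch a thread directly.

    ```objc
    #import <Foundation/Foundation.h>

    int main(void)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        NSOperationQueue *queue = [[NSOperationQueue alloc] init];

        // Each operation is an independent unit of work; the queue decides
        // how many to run at once based on the available cores.
        for (int i = 0; i < 4; i++) {
            [queue addOperationWithBlock:^{
                NSLog(@"operation %d running", i);
            }];
        }

        [queue waitUntilAllOperationsAreFinished];

        [queue release];
        [pool drain];
        return 0;
    }
    ```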


    • Posted by jongampark on September 12, 2009 at 9:47 PM

      Hi, Joachim. Thanks for leaving your comment.

      Well, everything on a computer is bit-work. So describing what makes a good candidate for OpenCL in that way may be meaningful to general users, but not to us programmers. Apple’s own documentation actually describes it, and after reading it briefly, what I found is that OpenCL is for the GPU. Although Wikipedia says it is for distributing work across CPUs and GPUs, it is really aimed at the GPU.

      To make things faster, CPU vendors introduced their own solutions like MMX, SSEx and AltiVec. They are SIMD instruction sets and are usually used in multimedia programming, for calculating FFTs, convolutions and so on; an MPEG implementation can benefit greatly from them. On the other hand, video card manufacturers kept improving their graphics “CONTROLLERs”, and now they are called PROCESSING UNITs, not CONTROLLERs. So GPUs now have very powerful calculation capability, and they are usually used for video media. It can therefore be faster to do the calculation for texture mapping or an FFT with the GPU’s instructions than with the CPU’s SIMD instructions.
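
      As a tiny illustration of what such a SIMD instruction set looks like from C (this assumes SSE on an Intel Mac, and the values are invented), a single intrinsic adds four floats at once:

      ```objc
      #include <xmmintrin.h>
      #include <stdio.h>

      int main(void)
      {
          __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);   // four packed floats
          __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
          __m128 sum = _mm_add_ps(a, b);                    // one SIMD add, 4 results

          float out[4];
          _mm_storeu_ps(out, sum);
          printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
          return 0;
      }
      ```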

      However, the problem is that different manufacturers provide different instruction sets. So we have a dilemma here: you would have to write code that does the same thing for every graphics card or CPU architecture you want to support, which means learning many different instruction sets and testing each one. So people started introducing a common layer; inside it, each function may use a different instruction set. Apple’s Accelerate framework does this, and Microsoft’s intrinsic functions are designed to do the same. Microsoft doesn’t support inline assembly any more; they still support MASM for x64, but it is not recommended, and they recommend intrinsic functions instead. The same thing happens with the GPU: OpenCL is such a layer. No matter what GPU you have, if there is an interface, bridge or whatever that maps OpenCL’s commands to that GPU architecture’s instructions, the GPU is supported by the same, common OpenCL.
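
      For example, here is roughly what that common layer looks like with Apple’s Accelerate framework (the arrays are just illustrative): one portable vDSP call, and the framework picks the best SIMD implementation for the CPU underneath.

      ```objc
      #include <Accelerate/Accelerate.h>
      #include <stdio.h>

      int main(void)
      {
          float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
          float b[4] = { 5.0f, 6.0f, 7.0f, 8.0f };
          float c[4];

          // c[i] = a[i] + b[i]; one call, no per-architecture intrinsics.
          vDSP_vadd(a, 1, b, 1, c, 1, 4);

          printf("%f %f %f %f\n", c[0], c[1], c[2], c[3]);
          return 0;
      }
      ```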

      So, OpenCL is for the GPU.

      For NSOperation, we can’t say that it is built on top of GCD. (I hate this abbreviation; it is confusing: Grand Central Dispatch or greatest common divisor?)
      On Leopard, which doesn’t have GCD, NSOperation already exists; I think it was implemented with threads (whether pthreads directly or NSThread indirectly).
      With Snow Leopard, however, they introduced a new concept called blocks (closures), and they seem to have started using it in Grand Central Dispatch.
      Or who knows? In some part of it they may use OpenMP. Actually, NSOperation is a kind of design pattern applied to multithreaded programming, so whether they use blocks, pthreads, or OpenMP underneath, NSOperation can be implemented. However, blocks seem better suited to multicore programming, and in many cases they may relieve you of managing synchronization variables, as in the sketch below.
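
      Here is a small sketch of that idea (the queue label and the counter are invented for illustration): updates to shared state are funneled through one serial GCD queue from inside blocks, so no mutex or condition variable is needed.

      ```objc
      #include <dispatch/dispatch.h>
      #include <stdio.h>

      int main(void)
      {
          __block int counter = 0;   // shared state, only touched on the serial queue

          dispatch_queue_t serial =
              dispatch_queue_create("com.example.counter", NULL);
          dispatch_queue_t concurrent =
              dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
          dispatch_group_t group = dispatch_group_create();

          // Many concurrent workers, but every increment is serialized by the
          // serial queue, so no explicit lock is needed.
          for (int i = 0; i < 100; i++) {
              dispatch_group_async(group, concurrent, ^{
                  dispatch_sync(serial, ^{ counter++; });
              });
          }

          dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
          printf("counter = %d\n", counter);

          dispatch_release(group);
          dispatch_release(serial);
          return 0;
      }
      ```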

      I will write more about these.
      Most blogs and sites mention them, but I haven’t found one that clearly explains how they differ and why the new concepts were needed.
      I will try to explain that. But I may not have enough time to write, or my English may not be good enough to explain what I know and think fully and efficiently. :)

      Thank you again for leaving your comment. It is always nice to talk about interesting issues with others! That is one of the many reasons I maintain this blog! :)

