Apr 29, 2007

Cocoa Application with custom Core Image filter 6: Embedding a Quartz Composer composition directly in an app.

In the last episode, we built a Cocoa application to filter an image using convolution. That application followed the traditional path for Cocoa development: we took an image from our view, handed it to the model to be processed, and sent the result back to the view for display. With Quartz Composer, the same application can be implemented in an entirely different way, one that opens up all kinds of interesting possibilities for interactive graphics applications. I'm sure all of you have seen that Quartz Compositions can respond to external inputs: Quartz Composer can take input from the keyboard and mouse, Spotlight image searches, folders of images, and even RSS feeds. Apple's RSS screensaver is a prime example of a Quartz Composition driven by RSS input.

Another way of feeding input to and getting output from a composition is to publish input and output parameters from it. In the simplest case, publishing lets Quartz Composer itself present a basic UI for interacting with your composition. For example, in my test.qtz composition, I publish inputImage and all the coefficients so that QC can provide a simple user interface.

More interesting, however, is embedding a composition in a QCView and accessing its parameters through Cocoa bindings and the QCPatchController. I've provided a link to this project so you can see how I did it: Convolver.zip

The first thing you need to do to use these classes in your own projects is to add the Quartz Composer palette to Interface Builder. To do this:

Open Interface Builder Preferences:

[Screenshot: Interface Builder preferences showing the Palettes tab]

Click the Add... button:

[Screenshot: Selecting the Quartz Composer palette]

You will get this new palette in Interface Builder.

[Screenshot: The Quartz Composer palette]

Next, we need to copy the test.qtz composition into the bundle for our application. It doesn't really matter where you put it; for our purposes, we just need to make sure it is packaged with the app when it ships. After that, we need to open the composition and make sure we publish all the inputs we care about so that the QCPatchController can access them. To do this, right-click each patch in your composition that has external inputs or outputs and check off each port you wish to publish. As far as I've been able to tell, there are no shortcuts here; you have to select each one individually.
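
As an aside, if you ever need to get at the packaged composition at runtime, say to load it into a QCView in code rather than through Interface Builder, the usual bundle lookup works. A minimal sketch, assuming the file keeps the name test.qtz and resultImage is a QCView outlet:

NSString *path = [[NSBundle mainBundle]
    pathForResource: @"test" ofType: @"qtz"];
if( ![resultImage loadCompositionFromFile: path] )
    NSLog(@"Could not load composition at %@", path);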

[Screenshot: Publishing inputs from the composition]

Whatever name you give an input or output at this point becomes the key used to access that parameter later.
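
For example, if you publish an input under the name src, then src is the key you hand to methods like QCView's setValue:forInputKey: at runtime. A one-line sketch, assuming a QCView outlet named resultImage (exactly what we'll wire up below):

// "src" must match the name chosen when the input was published.
[resultImage setValue: image forInputKey: @"src"];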

Now that we have a properly set up composition, we have to modify the original project to use the QCView and QCPatchController instead of old-school target/action. The first thing to do is to replace the outputImage NSImageView on the right of the main window with a QCView. Then, drag a QCPatchController into the nib file. I set the erase color of the QCView to transparent to avoid the large black square on the right side of the application when it starts.
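
For the record, the erase color can also be set in code; QCView provides a setEraseColor: method:

// A transparent erase color avoids the black square at startup.
[resultImage setEraseColor: [NSColor clearColor]];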

Next, we'll go ahead and set up the bindings for the project. The QCView's patch binding is bound to the patch key on the QCPatchController.


We bind the value of the NSImageView to the patch of the QCPatchController using the src.value key path.


Finally, each cell of the NSMatrix gets bound to the correct value in the composition:


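If you prefer working in code, the same bindings can be established with the standard bind:toObject:withKeyPath:options: API. This is only a sketch; the qcView and patchController outlets are assumptions, and the key paths simply follow from the published names above:

// The QCView renders whatever patch the controller manages.
[qcView bind: @"patch"
    toObject: patchController
 withKeyPath: @"patch"
     options: nil];

// The source image view feeds the published "src" input.
[sourceImage bind: @"value"
         toObject: patchController
      withKeyPath: @"patch.src.value"
          options: nil];

// Each matrix cell drives one coefficient; here, r00. I've only
// verified the view bindings; the cell binding is assumed to work
// the same way it does in Interface Builder.
[[convolutionMatrix cellAtRow: 0 column: 0] bind: @"value"
    toObject: patchController
 withKeyPath: @"patch.r00.value"
     options: nil];
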
The final piece of this setup is to load the composition into the QCPatchController.


Now you can actually test the working application from within IB. If you choose File->Test Interface and drop an image on the left image view, you will see the convolved output image displayed on the right. Of course, this explores only the barest minimum of the possible utility of Quartz Compositions and QCView.

This has greatly reduced the amount of Objective-C code we need to write to build this application. The entire Convolver class, which converted the image to a CIImage, called the filter, and converted the result back into an NSImage, is no longer needed. I still want to be able to select File->Open... to drop down a sheet and open the image, so that's really the only thing I need the ConvolverController to do at this point. There turns out to be one small catch that I'll point out when I get to it.

Here's the new header for the ConvolverController class:

/* ConvolverController */

#import <Cocoa/Cocoa.h>
#import <QuartzComposer/QuartzComposer.h>
#import <QuartzCore/QuartzCore.h>

@interface ConvolverController : NSObject
{
    IBOutlet QCView *resultImage;
    IBOutlet NSImageView *sourceImage;
    IBOutlet NSWindow *window;
}
- (IBAction)openImage:(id)sender;
@end

Some of you may look at this and wonder why I still need a pointer to resultImage, since the bindings should take care of it. That's where the catch comes in, as you'll see below.

This is the source code for the class:

#import "ConvolverController.h"

@implementation ConvolverController

- (IBAction)openImage:(id)sender
{
    NSOpenPanel *panel = [NSOpenPanel openPanel];
    [panel beginSheetForDirectory: nil
                             file: nil
                            types: [NSImage imageFileTypes]
                   modalForWindow: window
                    modalDelegate: self
                   didEndSelector:
        @selector(openPanelDidEnd:returnCode:contextInfo:)
                      contextInfo: nil];
}

- (void)openPanelDidEnd:(NSOpenPanel *)panel
             returnCode:(int)returnCode
            contextInfo:(void *)contextInfo
{
    NSArray *files = [panel filenames];
    NSString *filename = [files objectAtIndex: 0];
    NSImage *image = [[[NSImage alloc]
        initByReferencingFile: filename] autorelease];

    [sourceImage setImage: image];

    // The workaround: push the new image into the composition's
    // published "src" input directly, since the binding won't fire.
    [resultImage setValue: image forInputKey: @"src"];
}
@end

This is normal sheet-handling code, just like in the previous project, only a little simpler. The catch is the call to [resultImage setValue: image forInputKey: @"src"]: even though you might think the bindings on the NSImageView would update the composition automatically, they do not. Apparently it's an already-filed bug, and this one line of code provides a simple workaround.

So, that ends the journey of building a Cocoa app with a custom Core Image filter. I hope it was useful to you. This was a very simple example; there's so much more that can be done with these amazing technologies. Until next time, happy coding!

Apr 10, 2007

Cocoa application with custom Core Image filter 5: Calling the filter from a Cocoa app.

Last time in the village we packaged our convolution filter as an image unit. Since the filter was executable, we had to develop an Objective-C class that inherited from CIFilter and implemented a few methods. This time we will see how we can call our image unit from a Cocoa application. This technique will generalize to any other Image Unit in the system, built in or otherwise.

For the purposes of testing, I've created a simple Cocoa application called Convolver. As you can see, Convolver has one main window with two NSImageViews and a separate palette with a 3x3 NSMatrix of coefficients. Dropping an image on the left view causes the right view to update with a processed version of the left image. Changing the coefficients immediately causes the filter to run and change the rightmost image.

The structure of this application is pretty simple. There is an application controller called ConvolverController and a very simple model (Convolver) which takes an unprocessed NSImage along with an array of coefficients and returns a processed image. Looking at the nib file, you can see the user interface objects, the controller, and the model all instantiated there.

The inspector for the ConvolverController object shows the outlets for the controller.

The sourceImage and convolutionMatrix have their actions set to the controller's convolve: method, and the File->Open menu item triggers openImage:, so that opening an image or changing the coefficients of the matrix will cause the image to be processed again.
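
For reference, the same wiring can be done in code rather than in IB. A minimal sketch, assuming the outlets declared below and self as the controller:

// Re-run the convolution whenever a coefficient changes...
[convolutionMatrix setTarget: self];
[convolutionMatrix setAction: @selector(convolve:)];

// ...or whenever a new image lands on the source view.
[sourceImage setTarget: self];
[sourceImage setAction: @selector(convolve:)];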

The ConvolverController class is very simple. Here's the interface for that class:

/* ConvolverController */

#import <Cocoa/Cocoa.h>
#import "Convolver.h"

@interface ConvolverController : NSObject
{
    IBOutlet NSMatrix *convolutionMatrix;
    IBOutlet Convolver *convolver;
    IBOutlet NSImageView *resultImage;
    IBOutlet NSImageView *sourceImage;
    IBOutlet NSWindow *window;
}
- (IBAction)convolve:(id)sender;
- (IBAction)openImage:(id)sender;
@end

and here's the implementation:

#import "ConvolverController.h"

@implementation ConvolverController

- (IBAction)convolve:(id)sender
{
    NSImage *source = [sourceImage image];
    NSImage *dest = [convolver processImage: source
                           withCoefficients: [convolutionMatrix cells]];
    [resultImage setImage: dest];
}

- (IBAction)openImage:(id)sender
{
    NSOpenPanel *panel = [NSOpenPanel openPanel];
    [panel beginSheetForDirectory: nil
                             file: nil
                            types: [NSImage imageFileTypes]
                   modalForWindow: window
                    modalDelegate: self
                   didEndSelector:
        @selector(openPanelDidEnd:returnCode:contextInfo:)
                      contextInfo: nil];
}

- (void)openPanelDidEnd:(NSOpenPanel *)panel
             returnCode:(int)returnCode
            contextInfo:(void *)contextInfo
{
    NSArray *files = [panel filenames];
    NSString *filename = [files objectAtIndex: 0];
    NSImage *image = [[[NSImage alloc]
        initByReferencingFile: filename] autorelease];

    [sourceImage setImage: image];
    [self convolve: self];
}
@end

The convolve: method is called whenever the source image or the coefficient matrix changes. Notice that I pass the matrix cells unaltered to the model. At first I thought I would pull the values out of the cells and pass an array of NSNumbers, but then I decided the model should simply take the floatValue of whatever input it receives, to make sure it gets the correct input type. Without strong typing, it seemed I would have to do this in the model anyway, so I just do it there. The openImage: method opens a sheet to let the user select an image, and openPanelDidEnd:returnCode:contextInfo: sets the selected file as the source image and calls convolve:.

The implementation of the Convolver class and the processImage:withCoefficients: method are the most important part of this exercise. Here's the header:

/* Convolver */

#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

@interface Convolver : NSObject
{
    CIFilter *convolution;
    NSDictionary *filterAttributes;
    CIContext *context;
}
- (NSImage *)processImage:(NSImage *)image
         withCoefficients:(NSArray *)coefficients;
@end

and here's the implementation:


#import "Convolver.h"

@implementation Convolver

- (id)init
{
    if( self = [super init] ){
        // Register every Image Unit installed on the system,
        // including our custom convolution filter.
        [CIPlugIn loadAllPlugIns];
        convolution = [[CIFilter filterWithName:
            @"Convolution3by3"] retain];
        filterAttributes = [[convolution attributes] retain];
    }
    return self;
}

The init method loads all the Core Image plug-ins on the system and fetches our Convolution3by3 filter. We are not going to use the attributes here, but you can access and use them if you wish by sending the attributes message to the filter you load.
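
If you do want to poke around in those attributes, CIFilter can describe its own parameters; inputKeys is a standard CIFilter method. A quick sketch using the ivars above:

// Log the attribute dictionary for every input parameter.
NSEnumerator *keyEn = [[convolution inputKeys] objectEnumerator];
NSString *inputKey;
while( inputKey = [keyEn nextObject] )
    NSLog(@"%@ -> %@", inputKey,
        [filterAttributes objectForKey: inputKey]);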

The only other method in this class is processImage:withCoefficients:, and I'll break it down for you a step at a time.

- (NSImage *)processImage:(NSImage *)image
         withCoefficients:(NSArray *)coefficients
{
    if( context == nil ){
        context = [[CIContext contextWithCGContext:
            [[NSGraphicsContext currentContext] graphicsPort]
            options: nil] retain];
    }

The first thing we need is a Core Image context, which we create from the current graphics context and cache for later calls.

Now, since we are using an NSImageView, we need to convert the NSImage within it into a CIImage. The way I'm doing this is to use the initWithFocusedViewRect: method to get a bitmap representation of the NSImage, and then use that bitmap to initialize a CIImage object. Personally, I'm not sure why there are multiple image types in Cocoa (I'm sure someone at Apple has the reason), but it's just something we have to deal with.

    NSSize size = [image size];
    NSRect imageRect = NSMakeRect(0, 0, size.width, size.height);

    [image lockFocus];
    NSBitmapImageRep *rep = [[[NSBitmapImageRep alloc]
        initWithFocusedViewRect: imageRect] autorelease];
    CIImage *bitmap = [[[CIImage alloc]
        initWithBitmapImageRep: rep] autorelease];
    [image unlockFocus];

The next step is to set the parameters for the filter. We first call setDefaults to get the parameters into a known good state in case we don't set all of them. Core Image uses key-value coding to set all its parameters; Apple uses this technology so often, and it has turned out to be so useful for all kinds of applications, that I don't know what we did before it. One minor annoyance is that we have to make NSNumber objects for each of the float parameters, since Cocoa doesn't have any automatic coercion like Java 5's autoboxing.

    [convolution setDefaults];
    [convolution setValue: bitmap forKey: @"inputImage"];

    NSArray *keys = [NSArray arrayWithObjects:
        @"r00", @"r01", @"r02",
        @"r10", @"r11", @"r12",
        @"r20", @"r21", @"r22", nil];

    NSEnumerator *en = [keys objectEnumerator];
    int i = 0;
    NSString *key;
    while( key = [en nextObject] ){
        NSNumber *param = [NSNumber numberWithFloat:
            [[coefficients objectAtIndex: i++] floatValue]];
        NSLog(@"key %@ index %d value %@", key, i - 1, param);
        [convolution setValue: param forKey: key];
    }

Finally, we ask for the value of the "outputImage" key, which calls the outputImage method in the filter class and actually produces the result.

    CIImage *result =
        [convolution valueForKey: @"outputImage"];


Now we have to convert back to an NSImage. Unfortunately, from what I can tell, there's no way to just pull a bitmap representation out of a CIImage object. If anyone knows of a better way to do this, please leave a comment! (One candidate using NSCIImageRep is sketched after the listing.) For now, we draw the CIImage into our NSImage object and return it.

    NSImage *outputImage =
        [[[NSImage alloc] init] autorelease];
    [outputImage setSize: size];

    [outputImage lockFocus];
    [result drawInRect: imageRect
              fromRect: imageRect
             operation: NSCompositeSourceOver
              fraction: 1.0];
    [outputImage unlockFocus];

    return outputImage;
}
@end
@end
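
About that better way: AppKit's NSCIImageRep wraps a CIImage directly, which should let us skip the lock-focus-and-draw dance. I haven't swapped it into the app, so treat this as a sketch of what the tail of processImage:withCoefficients: could become:

    // Wrap the CIImage in an image rep and hang it on an NSImage.
    NSCIImageRep *ciRep = [NSCIImageRep imageRepWithCIImage: result];
    NSImage *outputImage =
        [[[NSImage alloc] initWithSize: size] autorelease];
    [outputImage addRepresentation: ciRep];

    return outputImage;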

That's the end of this tale. There is another angle on this same problem, however. Instead of converting images and calling filters ourselves, we can embed the Quartz Composer composition we developed as a test directly into our application using a QCView, and control it with the QCPatchController. Next time we'll reimplement this app using those techniques.