For the purposes of testing, I've created a simple Cocoa application called Convolver. As you can see, Convolver has one main window with two NSImageViews and a separate palette with a 3x3 NSMatrix of coefficients. Dropping an image on the left view causes the right view to update with a processed version of that image. Changing the coefficients immediately re-runs the filter and updates the right-hand image.
The structure of this application is pretty simple. There is an application controller called ConvolverController and a very simple model (Convolver) that takes an unprocessed NSImage along with an array of coefficients and returns a processed image. Looking at the nib file, the user interface objects, the controller, and the model are all instantiated there.
The inspector for the ConvolverController object shows the outlets for the controller.
The sourceImage view and the convolutionMatrix have their targets set to the controller's convolve: method, and the File->Open menu item is wired to openImage:, so opening an image or changing the coefficients of the matrix causes the image to be processed again.
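For readers who prefer to see these connections in code, here's a rough sketch of the equivalent programmatic wiring (purely illustrative; the real project makes these connections in the nib):
// Sketch only: in the real project these connections are made in the nib.
// NSImageView and NSMatrix are both NSControls, so they can send an
// action message to a target when the image or a cell changes.
- (void)awakeFromNib
{
    [sourceImage setTarget: self];
    [sourceImage setAction: @selector(convolve:)];
    [convolutionMatrix setTarget: self];
    [convolutionMatrix setAction: @selector(convolve:)];
    // The File->Open menu item's action would point at openImage: instead.
}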
The ConvolverController class is very simple. Here's the interface for that class:
/* ConvolverController */
#import <Cocoa/Cocoa.h>
#import "Convolver.h"
@interface ConvolverController : NSObject
{
IBOutlet NSMatrix *convolutionMatrix;
IBOutlet Convolver *convolver;
IBOutlet NSImageView *resultImage;
IBOutlet NSImageView *sourceImage;
IBOutlet NSWindow *window;
}
- (IBAction)convolve:(id)sender;
- (IBAction)openImage:(id)sender;
@end
The convolve: method is called whenever the source image or the coefficient matrix changes. Notice that I pass the matrix cells unaltered to the model. At first I thought I would pull the values out of these cells and pass an array of NSNumbers, but then I decided to have the model simply take the floatValue of whatever input it gets, which guarantees the correct input type. Without strong typing it seemed I would have to do this in the model anyway, so I just do it there. The openImage: method opens a sheet that lets the user select an image, and the openPanelDidEnd:returnCode:contextInfo: method sets the chosen file as the source image and calls convolve:. Here's the implementation:
#import "ConvolverController.h"
@implementation ConvolverController
- (IBAction)convolve:(id)sender
{
NSImage *source = [sourceImage image];
NSImage *dest = [convolver processImage: source
withCoefficients: [convolutionMatrix cells]];
[resultImage setImage: dest];
}
- (IBAction)openImage:(id)sender
{
NSOpenPanel *panel = [NSOpenPanel openPanel];
[panel beginSheetForDirectory: nil
file:nil
types: [NSImage imageFileTypes]
modalForWindow: window
modalDelegate:self
didEndSelector:
@selector(openPanelDidEnd:returnCode:contextInfo:)
contextInfo:nil];
}
- (void)openPanelDidEnd:(NSOpenPanel *)panel
returnCode:(int)returnCode
contextInfo:(void *)contextInfo
{
// Bail out if the user cancelled the open panel.
if( returnCode != NSOKButton )
return;
NSArray *files = [panel filenames];
NSString *filename = [files objectAtIndex:0];
NSImage *image =
[[[NSImage alloc]
initByReferencingFile:filename] autorelease];
[sourceImage setImage: image];
[self convolve: self];
}
@end
The implementation of the Convolver class and the processImage:withCoefficients: method are the most important part of this exercise. Here's the header:
/* Convolver */
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>
@interface Convolver : NSObject
{
CIFilter *convolution;
NSDictionary *filterAttributes;
CIContext *context;
}
-(NSImage *)processImage:(NSImage *)image
withCoefficients:(NSArray *)coefficients;
@end
The init method loads all of the Core Image plug-ins on the system and fetches the Convolution3by3 filter. We are not going to use the filter's attributes here, but you can access and use them if you wish by sending the attributes message to the filter you load; a small sketch of that follows the init method below. Here's the implementation:
#import "Convolver.h"
@implementation Convolver
-(id)init
{
if( self = [super init] ){
[CIPlugIn loadAllPlugIns];
convolution = [CIFilter filterWithName:
@"Convolution3by3"];
[convolution retain];
filterAttributes = [[convolution attributes]
retain];
}
return self;
}
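As an aside, here's a rough sketch of how you might inspect those attributes if you wanted to, for example, list each input key and the class of value it expects. This isn't part of Convolver; it just uses the filterAttributes dictionary we retained in init:
// Sketch: walk the filter's input keys and log what each one expects.
// filterAttributes is the dictionary we stored in init above.
NSEnumerator *keyEnum = [[convolution inputKeys] objectEnumerator];
NSString *inputKey;
while( inputKey = [keyEnum nextObject] ){
NSDictionary *info = [filterAttributes objectForKey: inputKey];
NSLog(@"input %@ expects %@", inputKey,
[info objectForKey: kCIAttributeClass]);
}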
The only other method in this class is processImage:withCoefficients:, and I'll break it down for you a step at a time.
-(NSImage*)processImage:(NSImage*)image
withCoefficients:(NSArray*)coefficients
{
if( context == nil ){
context = [CIContext contextWithCGContext:
[[NSGraphicsContext currentContext]
graphicsPort] options:nil];
[context retain];
}
The first thing we need to do is get a Core Image context, which we create from our current graphics context and cache for later calls.
Now, since we are using an NSImageView, we need to convert the NSImage it holds into a CIImage. The way I am doing this is to lock focus on the NSImage, grab a bitmap representation with initWithFocusedViewRect:, and then use that bitmap to initialize a CIImage object. Personally I'm not sure why there are multiple image types in Cocoa (I'm sure someone at Apple has a reason), but it's just something we have to deal with.
NSSize size = [image size];
[image lockFocus];
NSRect imageRect =
NSMakeRect(0, 0, size.width, size.height);
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc]
initWithFocusedViewRect:imageRect];
[rep autorelease];
CIImage *bitmap = [[CIImage alloc]
initWithBitmapImageRep: rep];
[bitmap autorelease];
[image unlockFocus];
The next step is to set the parameters for the filter. We first call setDefaults to get the parameters into a known good state in case we don't set all of them. Core Image uses key-value coding to set all of its parameters. Apple uses this technology so often, and it has turned out to be so useful for all kinds of applications, that I don't know what we did before it. One minor annoyance is that we have to wrap each float parameter in an NSNumber, since Cocoa doesn't have any sort of automatic coercion like there is in Java 1.5.
[convolution setDefaults];
[convolution setValue:bitmap
forKey:@"inputImage"];
NSArray *keys = [NSArray arrayWithObjects:
@"r00", @"r01", @"r02",
@"r10", @"r11", @"r12",
@"r20", @"r21", @"r22", nil];
NSEnumerator *en = [keys objectEnumerator];
int i = 0;
NSString *key;
while( key = [en nextObject] ){
NSNumber *param =
[NSNumber numberWithFloat:
[[coefficients objectAtIndex:i++] floatValue]];
NSLog(@"key %@ index %d value %@", key, i-1, param);
[convolution setValue: param forKey: key];
}
Finally, we get the value for the "outputImage" key, which calls the outputImage method on the filter and actually produces the result.
CIImage *result =
[convolution valueForKey:@"outputImage"];
Now we have to convert back to an NSImage. Unfortunately, from what I can tell, there's no way to just get a bitmap representation out of a CIImage object. If anyone knows of a better way to do this, please leave a comment! So, we draw the CIImage into our NSImage object and return it.
NSImage *outputImage =
[[[NSImage alloc] init] autorelease];
[outputImage setSize: size];
[outputImage lockFocus];
[result drawInRect: imageRect
fromRect: imageRect
operation: NSCompositeSourceOver
fraction:1.0];
[outputImage unlockFocus];
return outputImage;
}
@end
That's the end of this tale. There is another angle on this same problem, however. Instead of converting images and calling filters, we can embed the Quartz Composer composition we developed as a test directly into our application by using a QCView and controlling our composition using the QCPatchController. Next time we'll reimplement this app using those techniques.
Update: One of my readers pointed out an article on converting between NSImage and CIImage on Dan Wood's blog. That seems like a much better solution than mine, so thanks go out to WL and Dan Wood as well.
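For the curious, here's a rough sketch of the kind of round trip that approach uses, as I understand it (untested, and not Dan's exact code; sourceNSImage here is just whatever NSImage you start with): NSImage to CIImage via the image's TIFF data, and CIImage back to NSImage by wrapping it in an NSCIImageRep.
// Sketch only -- an alternative NSImage <-> CIImage round trip that
// avoids the lockFocus/initWithFocusedViewRect: dance used above.

// NSImage -> CIImage via the image's TIFF data.
NSBitmapImageRep *rep =
[NSBitmapImageRep imageRepWithData: [sourceNSImage TIFFRepresentation]];
CIImage *ciImage =
[[[CIImage alloc] initWithBitmapImageRep: rep] autorelease];

// CIImage -> NSImage by wrapping the CIImage in an NSCIImageRep.
NSCIImageRep *ciRep = [NSCIImageRep imageRepWithCIImage: ciImage];
NSImage *converted =
[[[NSImage alloc] initWithSize: [ciRep size]] autorelease];
[converted addRepresentation: ciRep];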