Friday, February 6, 2015

Arguably, Greece was a mess before joining the euro, and only got in due to the hubris of the euro-federalists, desperate to build a European superstate (for various reasons, although breaking the hegemony of the Anglo-Saxon power base was probably one of the leading motivators for the then-powerful Gaullists).

Once Greece made their real 8.3% deficit vanish into a mere 1.5% one, through accounting sleight of hand with the help of Goldman Sachs, they were welcomed into the euro.  At that point they were flooded with cheap money and felt no further need to restructure their baroque economy, full as it was of anachronisms and vested interests.  Wind the clock forward and their deficits ballooned wildly as old habits persisted.

Once the other shoe dropped on Greece’s serious sovereign debt situation, the European powers realized just how exposed their banks were to a Greek default.  It became essential to unwind the situation and give banks a chance to deleverage.  Only public money could achieve this without significant (supposedly disastrous) haircuts on private funds - haircuts deemed to have the potential to devastate the world (or at least the western) economy.  Consequently, public funds were funnelled into Greece, but not as a bail-out per se: mostly merely as loans with aggressive requirements for Greece to restructure internally in order to create a modern and more productive economy capable of producing a primary surplus.  These 'austerity conditions' have been applied and policed by external agencies - exerting draconian control over the domestic politics of Greece and its people.

The problem with this medicine was foreshadowed by the effects of the Versailles Treaty on Germany after WWI.  Versailles was a disaster in the long term because it let the leaders off scot-free, while punishing the ordinary people with measures that created extreme economic hardship.  Contrast this with WWII, where the allies punished the leaders and let the people off free.  Is it any surprise that history is repeating itself in the rise of nationalistic, even extreme Nazi parties in Greece?  Banking, big business and the political leaders who created a calamity are once again protected, even rewarded, while the people suffer the ignominies of a heavy financial yoke: loss of jobs, loss of social support, loss of dignity.

People can bear extreme oppression and hardship when they believe there is something good to fight for (consider the Blitz spirit in London), but fighting to protect the global banking system and Greek oligarchs is far from that.  Once again the optics are terrible.  Where is the justice?  How can the people who screwed up so badly still be in charge and telling the general populace that they will have to pay the price of the grievous mistakes made by the leadership and elite?  Of course, you can argue that there is collective responsibility (e.g. if people vote to have enormous pensions at a retirement age of 58, there is a large cost involved), yet it is surely a responsibility of political leaders to present hard truths, be wise and take critical action before disaster is banging down the front door.  At the very least, our leaders (and the most wealthy) enjoy tremendous privileges most of the time; why should they be immune from the consequences of their poor judgement?  While you would think natural law would require that leaders don’t enjoy benefits after gross mistakes, it is a travesty how often the opposite actually happens.  Still, the fortunes of most countries are such that leaders can ultimately be swept away (if not exactly punished) for their incompetence and perhaps the populace can at least believe that new leaders might show better judgement - at least for a time.  However, there are times when societies are driven to such extremes that these things really matter - when people are pushed down Maslow’s hierarchy - dignity is lost, injustices are glaring.  This is when people, in desperation, look to form new allegiances, new social pacts and contracts that they can believe in at least, to sweep away the pain that they are enduring.

The only defence possible for the actions of the powers-that-be in the Greek situation is that they were attempting to prevent an even more catastrophic and calamitous outcome on many, many more economies around the world through a deep depression caused by the twin shocks of the subprime credit crunch and a Greek default.  In a sense, they didn’t want their own poor decisions (to invest in an already profligate and bankrupt Greece) to come home to roost.  They were prepared to kick the can down the road, by using public funds and funds borrowed against the still-good credit of other countries, in order to prevent the effects of incipient failure in their own private banking systems.  As we can now see quite clearly, this has merely bought a little time.  The bomb is still primed to go off; they just hope that they’ve erected enough of a firewall around Greece to deflect the primary force of the explosion back onto its own people.  The only thing we don’t really know yet is the nature and composition of the explosion: political, social, financial.  Explosions are by their very nature chaos in rapid motion.  Things that could not be rearranged in an orderly fashion will now find new configurations quickly, and the new configurations will not necessarily be ones that anyone planned beforehand - the law of unintended consequences will be in full force.  What humanity has completely failed to learn is that the law of cause and effect is truly immutable and you cannot hide the liabilities you create (whether these are financial or social).  Our leaders are predisposed to putting off paying down mistakes and taking the consequences as long as possible, but that just means that compound interest will be due on those consequences.  Evidently we all learn how to do 'dodgy accounting' at a young age: lies, cover-ups, misdirection - and that scales up to our largest social, economic and political systems.
The more our leaders are able to veil truths from the people, the more they will be able to build up such gross distortions and use them to their own ends.  We must find a way to let real light into these situations and to hold our leaders to account much more quickly and effectively - that implies changes to the fourth estate and to democracy as a whole.

Saturday, February 21, 2009

...and then two come along at once (a.k.a. Core Data travails)

Per previous posts, I really like what the fine people at Apple have done with Core Data, in general.

One small thorn in the side though is working with Core Data in a multi-threaded environment.
Now, the docs make it quite clear that there are some limitations and choices to be made. Managed Object Contexts (the in-memory state of your Core Data data set) are not thread-safe. The documentation highlights a number of options basically involving having a sort of apartment model for MOCs. However it then goes on to say that, although discouraged, careful locking can be used to allow a MOC's managed objects to be passed between threads and used. The 'careful' locking in this case would include not just locking the MOC instance when you are executing queries (fetch requests), but also any time you explore a managed object's properties (because this can cause faulting on the object and automatic fetching of more detail).
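To make that discipline concrete, here is a minimal sketch of the bracketing locks described above. The `Person` entity, the `name` attribute and the `moc` ivar are hypothetical; the point is that even reading an attribute stays inside the lock, because it can fire a fault:

```objective-c
// Hypothetical helper: every touch of the shared MOC, including attribute
// reads (which may fault), is bracketed by the MOC's NSLocking methods.
- (NSString *)nameOfAnyPerson
{
    NSString *name = nil;
    [moc lock];
    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    [request setEntity:[NSEntityDescription entityForName:@"Person"
                                   inManagedObjectContext:moc]];
    NSArray *results = [moc executeFetchRequest:request error:NULL];
    // Reading the attribute can cause faulting, so keep it under the lock.
    name = [[[results lastObject] valueForKey:@"name"] copy];
    [request release];
    [moc unlock];
    return [name autorelease];
}
```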

So, given the discussion on locking, that I felt I understood even if it had to be applied diligently and completely, I had decided to take a 'shared MOC with extensive locking' approach. The approach seemed to be working out (despite the bracketing locks dotted all through the code to protect any access to managed object state changes - even through reading). However, a nasty problem emerged - a hang while updating a table in the UI. Yesterday, investigation revealed that a background thread had acquired the MOC lock in order to perform a query, and that the main thread was drawing the table and was waiting to acquire the lock on the MOC in order to read a string attribute on a managed object. A deadlock was occurring because the act of querying the MOC in the background thread caused something in Core Data to write data to the table, causing it to want to update, but an internal lock on changes to the table view was not obtainable because the table was repainting in the main thread. Urg.

So what was causing the MOC to send a data update to the table (via its controller)? Apparently, this is what it does when the table is bound (Cocoa bindings) to model data via a controller object. In other words, apparently you cannot use MOC locking to facilitate multi-threaded use of Core Data if you are using binding at all. At least I think that's the practical upshot of all of this (i.e. it may be possible to find hooks in the updating/drawing code in all UI objects bound to Core Data such that further locks could be inserted to prevent such a deadlock - but clearly this approach is coming apart at the seams). This is presumably one of the reasons Apple suggests locking is "strongly discouraged" (though AFAICS they don't elaborate on the nature of the beast that lurks if you try it).

Once the full realisation had hit, naturally the question remained as to what to do about it. The apartment model is one approach, but we require near-instantaneous updating of our UI as the model changes in a thread's MOC, and it is (currently) unknown what the effect would be (nor how best to set up a MOC-per-thread: where to store references to the MOCs and whether any manual committing/synchronisation is required). For the meantime I elected to back out to the simplest approach of having only the main thread make ANY use of the MOC. However, at this stage we have several bits of code running in background threads that were locking the MOC and proceeding to do fetches and use managed objects. What was the easiest way of having that code execute on the main thread, while the code around it continued to run on a background thread?

To punt some kinds of method invocation to the main thread, Cocoa provides the -performSelectorOnMainThread:withObject:... style of methods. Like the other methods in the "performSelector" family, these are fine for one-way message sends with only one or two arguments. However, these methods can never return a value (a common requirement), and they are limited to object parameters (e.g. no scalar values). This limitation got me wondering what it would take to build something that would marshal potentially any message to another thread, and deal with the return value correctly. Of course, it turns out that this already exists in the form of Cocoa Distributed Objects - but then the challenge was to come up with something that worked effectively in the case of punting an instance method message send to the same object on the main thread.
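By way of illustration (the selectors here are hypothetical), the family covers the one-way case but nothing more:

```objective-c
// Fine: a one-way send with a single object argument.
[self performSelectorOnMainThread:@selector(setStatusText:)
                       withObject:@"Done"
                    waitUntilDone:NO];

// Not expressible with this family: scalar arguments or a returned value.
// e.g. there is no form of performSelectorOnMainThread: that could pass an
// int argument to -countOfRowsForWidth: or hand its result back.
```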

Here's what I came up with:
#define MISELF \
NSConnection *miself_service; \
id miself;

// Initialise miself ivars on main thread, wait to ensure complete
#define MISELF_INIT \
[self performSelectorOnMainThread:@selector(miselfServeInstanceFromMainThread) withObject:nil waitUntilDone:YES];

// Define the method that serves this instance over DO and sets the ivars
#define MISELF_METHOD \
- (void)miselfServeInstanceFromMainThread { \
NSString *serviceName = [self description]; \
miself_service = [[NSConnection serviceConnectionWithName:serviceName rootObject:self] retain]; \
miself = [[NSConnection rootProxyForConnectionWithRegisteredName:serviceName host:nil] retain]; \
}

These three macros define a little toolkit for converting any class to provide inter-thread thunking for its instance methods.
To set up a class, you put the MISELF macro in the ivars declaration (in the class interface), the MISELF_INIT macro into the -init method or methods, as required to have it always called once on any object initialisation, and the MISELF_METHOD goes anywhere you would start a method definition in the implementation block.
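As a sketch, adopting the macros in a hypothetical class would look something like this (the class, ivars and method names are illustrative only):

```objective-c
@interface Downloader : NSObject
{
    MISELF              // declares the miself_service and miself ivars
    NSTableView *table;
}
- (void)noteProgress:(NSString *)message;
@end

@implementation Downloader

- (id)init
{
    if ((self = [super init])) {
        MISELF_INIT     // registers the DO service from the main thread
    }
    return self;
}

MISELF_METHOD           // expands to -miselfServeInstanceFromMainThread

- (void)noteProgress:(NSString *)message
{
    // When invoked as [miself noteProgress:...] from a background thread,
    // this body executes on the main thread via Distributed Objects.
    [table reloadData];
}

@end
```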

With these in place, the power of Cocoa Distributed Objects is available to punt method invocations (from within the class) to the main thread by using this form:
[miself doSomethingWith:@"Marvin" andThisNumber:42]
...instead of:
[self doSomethingWith:@"Marvin" andThisNumber:42]

At the very least, this has made some of the remedial work, of getting code that runs in a background thread to do its Core Data business on the main thread, rather easy. I still have to learn whether the MOC-per-thread approach would be the right way to go ultimately, but at least I've just ridden out of Deadlocksville. Hopefully I won't be back for a while.

P.S. Apple developer documentation ranges from the really excellent to the, well... less than really excellent. Overall, I'd say the Core Data docs are better than average, but they could still do with explaining multithreaded options much more clearly, particularly when it comes to how exactly one would typically create the MOC-per-thread scenarios. Also, while the locking option is certainly "strongly discouraged", there is no elaboration as to what this means, and in fact the conversation winds on to talking about how locking can be implemented. Given the importance of binding in Cocoa these days, there really should be some specific mention of the (effective) incompatibility of binding with locking (in an attempt to deal with multiple threads - the only reason for locking in the first place).

No Toot for a long time

Well, I should probably pick this blog up again. The hiatus has been filled with further learning and a measure of head scratching as I've continued to spend time getting to grips with Apple's Cocoa. It's been over a year since I started to play with it, and as might be expected I've come a long way, but feel like I have so much further to go. I don't remember ever feeling like I had so much to learn when faced with other frameworks (such as Java JFC, AWT and Swing). This is surely because Cocoa is such a rich framework and much higher-level than most others - or perhaps I'm just getting a little bit older and have killed too many brain cells with beer? Anyway, maybe a little retrospective "A year in the Mocha" could be forthcoming to capture some of the highs and lows in my Cocoa journey this last year.

Wednesday, February 27, 2008

Functional Programming, the opiate of some people (or... Tripping on CAL)

If you don't like opinion pieces on the joys of particular programming styles/languages - don't read on ;-)

It has been months since I wrote any functional code in earnest. Most recently I have been busy coding in Objective-C, and I've a long history with Java, C++ and C, with the latter language being used for many years in conjunction with a very elegant home-brew object-like framework.

As something of a student of language technology: principles and practice, I usually follow all the news and buzz to be found spread around the infowebs, in such places as Lambda the Ultimate, various blogs and language mailing lists (such as Ruby-core).

Just today, I had reason to fire up the CAL tools again in order to knock up some prototypes. The overall experience was surprising to me, in terms of the effect that producing a program in a (lazy) functional language actually had on my psyche. Having given functional programming a rest for several months, I was almost "accosted" by the flavours of functional programming afresh. Being one with a reasonable grasp of the concepts, I naturally did not have the learning curve that one would experience from scratch (i.e. this is not an ab initio experience), yet the hiatus was sufficient to bring the differences between functional and procedural/OO programming into sharper relief, and evidently to tickle my brain anew in areas that I had forgotten were differentiated for the kind of cognition that you do when crafting a functional solution.

I was using the combination of CAL's textual and graphical development tools (the Eclipse plug-in and the Gem Cutter to be precise). I have previously found that the combination of these two can be very potent, especially in the exploratory stages of development - when you are surveying the problem domain and starting to form hypotheses about the structure of a solution.

Once I had completed the tasks at hand, I sat back, and was aware of the 'buzz' that I was feeling. This is the buzz you get from general problem-solving success, presumably a literal reaction to endorphins released (I assure you there were no other overt substances involved, unless you count my normal coffee intake that morning!). Thinking about why I was feeling so chipper, I surmised that it was for two reasons:
1. A strong sense of puzzle solving.
2. A sense of the elegance of the emergent solution.

On the first point, it occurred to me that the whole exercise of assembling a functional solution is, for the most part (that is, ignoring some of the operational issues), one of dealing directly with fitting shapes together. This is of course an analogy, but what is going on in my head feels like the assemblage of shapes - fitted together perfectly to create the required abstracts. Of course, this is nothing more than a whiff of mathematics in general - the perception of structures and order, and formal assemblies of such things. I think the whole perception of 'shape' is made even more tangible by the ingredient of having the visual development environment alongside the syntactic one - two kinds of symbols to aid the perception of abstracts. In CAL, the visual environment is also much more interactive with a lot of information-rich feedback (rollover type inferencing on any argument/output, popup lists of proposed functional compositions etc.). I suppose that this sense of "assemblage" is at least partly responsible for the strong sense of puzzle solving. One experiences similar sentiments having designed a nice OO pattern/architecture, but not in the same way.

To the second point, concerning elegance, this is something that is strongly related to the way symbols have such a direct relationship to the semantics of functional programs. Any part of a CAL program that actually compiles makes a formal and precise (though not necessarily narrow) contract with the rest of the world as to the quantum of processing it contributes. Part of the elegance comes from the richness of the type system and the sets of types of values that can be described as forming this contract. Another part, however, comes from the fact that the function (as a package of processing) is a highly flexible unit. The contract that functions make in a lazy functional language concerns the relationships between the input and output values, but these relationships can be composed into larger structures in any way that satisfies the strictures of the type system. Elegance, though, is merely a perceived quality; what is more important is how the manifestation of elegance relates to practical effects on the economics of software development.

At a low level, this manifests as a beautiful ability to build abstracts through binding arguments in any order as larger logical structures are created. In other words, the way in which you abstract is fluid, and the little semantic package of the single function can be utilised in so many different ways, compared to the strict notion of a 'call' in procedural languages.

At a high level, this behaviour results in the functional packages being able to be combined (and importantly, reused) in very powerful ways, with a direct bearing on the way the intended semantics can be conjured, but always under the watchful eye of the type system - which is able to act in both a confirming and informational way. The latter can feel like epiphany. Many times have I been focussed on a specific problem, and composing functions for that purpose, only to have the compiler's inferred type tell me that I've produced something much more general and powerful than I had thought (sometimes it even tells me that I've been stupid in overlooking a function already in the library that does exactly what I'm trying to do again!).

Today's fun with CAL had all these facets. The qualia of functional programming are quite different to those of OO programming, and in many ways you are much more constrained than in the latter. Good OO design is certainly critical to creating correct, efficient and maintainable software, and while there is therefore a real spectrum of 'better' and 'worse' designs/implementations for a given problem, much of the structure of an OO program itself is informal to the implementing language and lives in the minds of the developers who created it (and perhaps in their technical documentation). The reasons why certain encapsulations were chosen over others, and why certain code paths/sequences are established are undoubtedly built on reason, but they become major artifacts of the particular solution. In the functional world, things are both more sophisticated and simpler at the same time (naturally, 'complexity' has to go somewhere). Functions are not the atomically executed entities of the procedural world, and their algebraic composition is a very powerful method of abstraction, as described earlier. The type system is much more pervasive and complete, which is a double-edged sword: it forces you to think about the realities of a problem/solution much earlier (which feels like constraint), but it also enables the aforementioned much more meaningful 'conversation' with the compiler. The up-front requirement to conform to formal boundaries in expressing a solution costs a little more at the beginning, but pays back handsomely downstream - both in terms of the earlier/deeper accrued understanding of the problem domain, and in the much higher likelihood that better abstractions have been selected. As any first-year Comp Sci undergraduate knows, the costs of correcting bad assumptions/design later in the software lifecycle are much higher than earlier.
There are still choices about encapsulation in functional languages (which modules to create, how to deliver functionality in appropriately sized quanta to allow requisite reuse and generality of components), but the packets of semantics themselves, and the manner of their abstraction, are far more fluid. The denotational quality of the syntax has value for reasoning too, but that's another kettle o' fish.

At the end of the day, any developer will get a buzz out of using a tool that allows rapid manifestation of a good solution to the problem at hand (by some definition of "good"). The qualities of the functional approach however imbue a certain concision and confidence to the construction, and with the type system, really appear to have a pseudo-physical quality of "inserting the round peg into the round hole". So it is (I think) that when you stand back from the 'model' you have just assembled, there is a much more tangible quality to the functional construction - that it has been assembled from shapes, and that those shapes in turn had a robust definition. The whole model has been 'validated' by the type system (as an instance of a theorem prover), and you are now the proud owner of some new 'shape' with its tractable, testable semantics, and its ability to be further glued into an even larger (and impressive?) construct, with some degrees of freedom about which vertices and edges 'stick out' to be perceived on the surface of the larger creation.

Whatever, dear reader, you may adjudge as the real reasons for my trippy experiences, I'm guessing that most developers who take the time to really understand what functional languages offer are likely to come away from the experience (and hopefully some real practice) appreciating some aspects of the foregoing. I'm not personally one of those who would use a functional language for every problem (at least the current batch, with the current libraries available, to say nothing of available developer skill sets), but I'm beyond persuaded as to the very real advantages they offer on certain problems, as a component of applications. Perhaps it is a growing appreciation of this sentiment that is driving the apparent uptake of what I'll loosely call 'functional concepts' within mainstream languages, or extensions thereof. Lambdas/blocks, closures etc. have appeared in Python, Ruby, Scala, C#/LINQ and (maybe) Java. It will be fascinating to see how these fare and evolve as embedded features of languages that are centred around long standing procedural concepts. Certainly these features allow, and even encourage, a more functional style for use where appropriate. However, the basic tenets of a language are a critical factor, and so far these languages are a far cry from the algebraically typed world, combinatorial construction and semantics of a modern lazy functional language.

Right. Back to the OO world now then...

[glow fades away]

Tuesday, February 26, 2008

Xcode 3.1 - come quickly!

As much as I've grown to really like much about Xcode and friends, unsurprisingly for a 'point zero' release, there are many annoying foibles, OK bugs, to be found lurking.

I'm quite sure that Apple's reasoning about the release of Leopard would not have extended to the quality of its developer tools - beyond fixing the known critical bugs of course, Leopard was not going to be held up by a lack of complete polish in software that only developers care about. This really shows in the number of medium- and lower-severity issues that remain in the 3.0 tool suite. A lot of these problems are undeniably jarring, with potential costs in working around the problem (or disappointment in having to avoid a feature that would be convenient or improve productivity) - but they have clearly done a good job of removing the crashers.

Nevertheless, as has been mooted in other places, it is now four months since the release of Leopard and we have yet to have an incremental release of the Developer Tools. In all likelihood an update is imminent as a part of the upcoming iPhone development kit, so perhaps we don't have long to wait. I can only hope that the Xcode tool development group at Apple weren't so sequestered onto iPhone tooling that they haven't had the time to plough through some of the medium and low priority bugs. We'll see I guess.

While there are probably a half-dozen 'quirks' in the 3.0 tools that I have learnt to avoid, none of the tool limitations are as annoying as the issues that remain in Interface Builder (as I've mentioned before). As I have been spending a good deal of time recently knocking up UI for my application, these issues have really been getting under my skin.

The most heinous issue IMHO (!) is the lack of any real ability to re-parent view hierarchies in IB3. Several times, I have made the mistake of building initial/prototype UI by constructing views in a top-down manner (i.e. split view, a tree control on the left, another split view on the right, then further views beneath that), only to get badly stung by the limitations in IB3 to rearrange things. Split views, for instance, are infuriating. You can easily enough create one, by first creating the left/right views, selecting them and then doing "Embed objects in... -> Split view" (I'll refrain from commenting much on how this creation methodology is odd when you are otherwise creating hierarchy top-down). The children you 'embed' might be some table controls, for example. So initially all is well, but then you realise that you didn't want the right hand table control at the top level of the right-hand content of the split view - perhaps you need another splitter, or simply to add some peer controls with the table. You would think you could do one of the following things:
- Morph the table view (I guess the top view of that cluster - its scroll view) into a new container (box, custom view, ...)
- Insert a new view 'between' the table view and the scroll view (essentially replace the right hand content of the scroller and let the existing view there become parented to this new view)
- Temporarily pop (or cut) the table view 'out' of the right pane of the scroll view, in order to drop another view in its place, before dragging the table view back onto this as the first child

Well, in the case of split views, none of the above is possible. Once a view is glued into a Split view (on either the left or right panel), it seems that it is impossible to remove. The only solution I have found is to delete the entire split view (with its descendants) and start over. After even a moderate amount of flying around Interface Builder's inspector to set up attributes/bindings etc., this is very frustrating - and often the process of setting up the views 'just right', with positioning or correct behaviours can represent a lot of iterations and remembering all the settings to recreate them again in another instance of the same views is quite tiresome. It seems that in IB3 the creation of split views is essentially atomic, and in order to ensure flexibility I have developed the habit of always putting a custom view in the left/right slots, irrespective of what I actually think I might need to build beneath these views.

Aside from this egregious case of IB3 lacking some rather critical functionality, there are other limitations with re-parenting. There are situations where you need to restructure a subtree of views, and want to preserve some of the existing view hierarchy, but don't have the new parent to move it to yet. You can cut and paste views, perhaps parking the view hierarchy you want to preserve at the top level of the NIB while you rearrange the new environment for that part of the UI. However, more often than not, when you come to drop the UI back into the new parent, you will find that many of the settings have reset (attributes and bindings).

Despite the foregoing, the whole NeXTstep approach to UI (NIBs, frameworks) has a great deal to be admired. IB3 is also (in general) much improved over its forebear, but clearly as a "complete rewrite", it does suffer somewhat from version-1-ism. Fingers crossed, that will be a complaint that will be short-lived and we'll see marked improvements in the next point release.

Friday, February 15, 2008

Using Core Data in NSArray controllers with a parameterised fetch predicate

Today I was wondering if it was possible to 'restart' the master->detail chain of a set of controls, by using a Core Data query (fetch predicate) in a controller, but parameterising the query with a value obtained from the selection in another, 'upstream' controller.

Normally, master->detail UIs are created by simply binding a selection from a master controller to the contents of a detail controller, and selecting some sub-component of the selection object(s) as the detail Model Key Path.

You can type any valid fetch predicate into the controller's properties panel, but in order to do what I want, you would need to be able to resolve a key from the 'upstream' object. That's the bit I don't think you can straightforwardly achieve (at the moment).

There are certainly a good many ways to actually achieve what I need - including binding the detail content to a custom controller key and having the get accessor for this key derive its results from the upstream selection content (which naturally it would need to observe in order to signal a change on its own key). The thing about how bindings work in general though is that they are so convenient and powerful, often requiring NO code in the application at all, so it's tempting to see if there's some way to contrive to get the required key to test in that filter predicate somehow, without resorting to 'external code'...
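For completeness, a rough sketch of that 'external code' route. Everything here is hypothetical (the `upstreamController` and `moc` outlets, the `Detail` entity, the `master` relationship, the `detailItems` key): a glue object observes the upstream selection and exposes a derived key for the detail controller's content to bind to:

```objective-c
// Hypothetical glue controller: the detail array controller's content
// binds to this object's 'detailItems' key.
- (void)awakeFromNib
{
    [upstreamController addObserver:self
                         forKeyPath:@"selection"
                            options:0
                            context:NULL];
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    // Signal that our derived key changed when the upstream selection moves.
    [self willChangeValueForKey:@"detailItems"];
    [self didChangeValueForKey:@"detailItems"];
}

- (NSArray *)detailItems
{
    id master = [upstreamController valueForKeyPath:@"selection.self"];
    NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
    [request setEntity:[NSEntityDescription entityForName:@"Detail"
                                   inManagedObjectContext:moc]];
    // Parameterise the fetch predicate with the upstream selection.
    [request setPredicate:
        [NSPredicate predicateWithFormat:@"master == %@", master]];
    return [moc executeFetchRequest:request error:NULL];
}
```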

Thursday, February 14, 2008

NSValueTransformers update

Well, my experiment with NSValueTransformers for NSIndexPath morphing between two NSOutlineViews was interesting.
The plan was to have a subclass of NSValueTransformer that required two outlet connections to the two trees (to be more precise, to the NSTreeControllers), and then have instances live in the NIB file, all self-contained, with the two NSTreeControllers binding to each other's selections (NSArrays of NSIndexPaths) via these transformers. The transformers' job is to re-express the selection in terms of the local tree's outline, using the set of model (represented) objects as the underlying meaning of the selection.

So, this idea is designed to work in circumstances (that I have) where certain constraints apply:
1. The trees refer to the same model objects (at least the selectable ones), though arranged differently
2. The trees are either single selection, or it's OK to select ALL the occurrences of nodes representing a given model object (even if this potentially widens the selection when returned to the source tree)

I created a small collection of NSValueTransformer subclasses: to handle converting to/from a represented object, to handle targeting the first matching object in the destination tree, and to handle selecting ALL matching nodes in the target tree.
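As a rough sketch of the shape of one of these subclasses (class and outlet names are my own guesses, not the actual ones), the 'selection to represented objects' direction might look like:

```objc
// Sketch: transforms a source tree's selection into the underlying set of
// model (represented) objects. Names here are illustrative.
@interface SelectionToObjectsTransformer : NSValueTransformer
{
    IBOutlet NSTreeController *sourceTree;
}
@end

@implementation SelectionToObjectsTransformer

+ (Class)transformedValueClass { return [NSArray class]; }
+ (BOOL)allowsReverseTransformation { return NO; }

- (id)transformedValue:(id)value
{
    // 'value' would be the array of NSIndexPaths bound from the source
    // controller; the controller's -selectedObjects conveniently resolves
    // those paths to the represented model objects for us.
    return [sourceTree selectedObjects];
}

@end
```

The companion subclasses would do the reverse: walk the destination tree looking for node(s) whose represented object matches, and emit either the first matching NSIndexPath or all of them.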

When I came to try this out as intended, I met a hitch (naturally!). I had intended for the relevant instances of these transformers to live in the NIB, where they could be handily connected/bound to the local objects without requiring construction in the main body of code. As such they would be nicely encapsulated as UI behaviour. However, it turns out that there is no easy way to have two NIB instances register under two different transformer names. NSValueTransformers work by registering a named instance of themselves with a central repository, and the registered name is then used to refer to them in binding parameters. I needed to register two instances: one with tree1 and tree2 as source and destination, and one with tree2 and tree1 as source and destination (i.e. reversed). Without creating a new Interface Builder palette object that contains a design-time property for the name an instance should register under, you would have to have two *classes* to express the source and destination differences required. Even then, from a log message, it seems that the transformers need to be registered before the objects that use them are even awoken from the NIB, and that either means performing the registration in the -init method, or giving up on the idea of storing instances in the NIB altogether and registering these objects in the main code (along with code to set the two trees into the transformer instances that are created).
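For concreteness, the registration that defeats the in-NIB approach is the standard named-registration call, which has to run (with each instance already configured) before anything references the names in binding parameters. Variable and registration names below are purely illustrative:

```objc
// Two differently-named instances of the same transformer class, one per
// direction. Both must be registered early, e.g. in main code or +initialize.
[NSValueTransformer setValueTransformer:forwardTransformer   // tree1 -> tree2
                                forName:@"Tree1ToTree2Selection"];
[NSValueTransformer setValueTransformer:reverseTransformer   // tree2 -> tree1
                                forName:@"Tree2ToTree1Selection"];
```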

So, all in all, circumstances were conspiring to make this barely workable, with the need to register a name for each transformer instance really getting in the way of the goal. This is somewhat annoying because, once you did contrive to get the transformers initialised in time (as separate classes), things worked rather well! It's clear that the real design intention for NSValueTransformers is more as 'global functions' that an application initialises and registers early, and then easily references in binding parameters.

Anyway, I decided transformers were just a little too awkward for what I wanted, and set about thinking about an alternative.
I still wanted an object I could just drop into the NIB and connect up to the relevant trees for synchronised behaviour. I decided to implement what I called a "TreeIndexPathsExchange". This is an object to which NSTreeControllers can directly bind their selection state, and from which they receive new selection state (from changes in other controllers that are also bound). Here is an outline of what I created:

- First, a set of 7 (arbitrary number) IBOutlets of type (NSTreeController *) to which tree controllers could be bound in a NIB. The idea is that up to 7 controllers could be linked to synchronise selection between them. Each outlet is labelled 'treeX' where X is 0 through 6.
- An ivar holding the set of currently selected model (represented) objects. These are common across all trees and this set represents what is truly selected in common.
- Next, an implementation of synthetic properties treeX (where X is an ordinal representing the 'port' that a particular tree is bound on). This was done by overriding -valueForKey: and -setValue:forKey: in the implementation and handling all keys that conform to the pattern "treeX", where the X is allowed to go up to the top port number (currently 6). The implementation of the getter for a treeX is to take the set of currently selected model objects and search for these in the tree connected on 'port' X. The nodes that are found to represent these objects are then converted to NSIndexPaths and the array of these is returned. The setter for a treeX property obtains all the represented objects in the tree that is providing the selection, and if these are different from the set we have cached, then we update the cache and notify ALL the other connected 'ports' that their value has changed. This is done by looping through from 0 to 6, checking if there's an NSTreeController attached to the outlet for the port, and if so, doing a -willChangeValueForKey: and -didChangeValueForKey: on the key "treeX".
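A condensed sketch of those synthetic treeX properties, as described above (the helper method names, the ivar name and the MAX_PORTS constant are illustrative, not the actual code):

```objc
// Sketch of the exchange's synthetic 'tree0'..'tree6' properties.
#define MAX_PORTS 7

- (id)valueForKey:(NSString *)key
{
    if ([key hasPrefix:@"tree"] && [key length] == 5)
    {
        int port = [[key substringFromIndex:4] intValue];
        if (port >= 0 && port < MAX_PORTS)
            // Re-express the cached selected model objects as NSIndexPaths
            // in the tree attached to this port.
            return [self indexPathsForSelectedObjectsInPort:port];
    }
    return [super valueForKey:key];
}

- (void)setValue:(id)value forKey:(NSString *)key
{
    if ([key hasPrefix:@"tree"] && [key length] == 5)
    {
        int port = [[key substringFromIndex:4] intValue];
        NSSet *incoming = [self representedObjectsForIndexPaths:value inPort:port];
        if (![incoming isEqualToSet:selectedObjects])
        {
            [selectedObjects setSet:incoming];   // update the common cache
            // Tell every other connected port that its value has changed.
            for (int i = 0; i < MAX_PORTS; i++)
            {
                if (i == port || [self treeControllerForPort:i] == nil)
                    continue;
                NSString *portKey = [NSString stringWithFormat:@"tree%d", i];
                [self willChangeValueForKey:portKey];
                [self didChangeValueForKey:portKey];
            }
        }
        return;
    }
    [super setValue:value forKey:key];
}
```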

This solution is great for something that can just live in a NIB, and is quite elegant. On testing, it worked first time with one exception - when going from a tree that had only one node representing an object to a tree that had several nodes representing it, I noticed that no selection was being made. The debugger seemed to demonstrate that the right things were happening, but the selection in the 'target' tree was resolutely being left with nothing selected. It occurred to me that both trees were configured for single selection, and that maybe the attempt to set multiple NSIndexPaths was causing no nodes _at all_ to be selected. I artificially restricted the number of index paths to 1 (the first matching node) and lo! everything was working again. I'm not sure why NSOutlineView rejects all selection when presented with multiple paths in its single-selection mode (I would perhaps have expected it to select the first node in the list, or the one 'highest' in the tree). Maybe there's a good reason, but the current behaviour was cramping the style of this facility! What was needed was a way to determine whether a tree would accept multiple selections and, if not, pick a single NSIndexPath to send it. Unfortunately, the objects bound to the exchange are NSTreeControllers, and these have absolutely no knowledge of the selection mode set on any NSOutlineViews that might be bound to them. In order to allow the exchange to make judgements such as this (whether to send a single selection or multiple selections through), I added more IBOutlets, one for each treeX outlet, called viewX (X being 0-6). These are of type (NSOutlineView *) and allow an appropriate outline control to be optionally connected for a port X, in complement to the required NSTreeController.
In the case of finding a view connected, the exchange can query the outline view as it compiles a new selection to send to its NSTreeController, and can elect to send only one NSIndexPath if the control only supports single selection.
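That check reduces to something like the following (again with illustrative helper names); -allowsMultipleSelection is the standard NSTableView/NSOutlineView property the tree controller can't see:

```objc
// When compiling a new selection for port X, trim it to one path if an
// optional viewX outlet is connected and reports single-selection mode.
NSOutlineView *view = [self outlineViewForPort:port];
NSArray *paths = [self indexPathsForSelectedObjectsInPort:port];
if (view != nil && ![view allowsMultipleSelection] && [paths count] > 1)
    paths = [NSArray arrayWithObject:[paths objectAtIndex:0]];
```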

So, today ended with a nice behaviour object to put in my NIB(s) to synchronise outline views (in the scenarios I'm currently interested in). The result is, IMHO, very 'Cocoa-y' in that application code is decluttered of this UI behaviour detail, which is all nicely encapsulated in the relevant NIB. In any case, I got a buzz out of it :-)