Rob Baillie

More than 2 decades of writing software, and still loving it...

Lightning Web Components - Conditional formatting and iterators

Thu, 2018-12-27 07:11

One of the really elegant parts of Lightning Components was the ability to conditionally apply classes based on data.

This is something that is no longer available to us, as the expressions we are allowed to put into templates are now limited to either:

  • A property of the Javascript class (or a sub-property that can be referenced through a top level property).
  • A getter, that accepts no parameters.
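
For example, the most we can reference from the template is something along these lines (a trivial sketch - the property and getter names are made up for illustration):

// In the Javascript class
get fullGreeting() {
    return 'Hello ' + this.name;
}

And in the template we can only write {fullGreeting} - no expressions, no concatenation, no parameters.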

I’ve already mentioned a change in this area in this blog post about building re-usable components, but I felt it was time to drill into this just a little further...

The scenario

Let’s say we want to render a list of objects. There’s a flag 'isSelected' on each of the records, and if that flag is set we want to change the rendering for that particular record.

JSON data:


[ { "id" : 1
, "name" : "Anne"
, "isSelected" : false },
{ "id" : 2
, "name" : "Bob"
, "isSelected" : true },
{ "id" : 3
, "name" : "Carla"
, "isSelected" : true }
]
Required Output:

<ul>
    <li class="record">Anne</li>
    <li class="selected record">Bob</li>
    <li class="selected record">Carla</li>
</ul>
Lightning Component

In a Lightning Component, this would be near trivial, as we could use a ternary operator in the template to render the inclusion of the 'selected' class conditionally.


<ul>
    <aura:iteration items="{!v.records}" var="thisRecord">
        <li class="{!(thisRecord.isSelected ? 'selected' : '') + ' record'}">{!thisRecord.name}</li>
    </aura:iteration>
</ul>

The reason this is so simple, is that we could put proper expressions into our replacements in Lightning Components, giving us fantastic flexibility in the output for each individual attribute.

Unfortunately (for this case), this isn't possible in Lightning Web Components...

Lightning Web Component

First up, let's just clarify what we mean when we say we can no longer do the string concatenation, or the ternary operator in an attribute expression, as I detailed in my earlier post.

What we mean is, we can’t do the following:


<ul>
    <template for:each={records} for:item="thisRecord">
        <li class={(thisRecord.isSelected ? 'selected' : '') + ' record'}>{thisRecord.name}</li>
    </template>
</ul>

All we can do is reference a single value from our data held against our Javascript object, or call a getter against it. E.g. (not that this template is of much use to us right now)


<ul>
    <template for:each={records} for:item="thisRecord">
        <li key={thisRecord.id} class={thisRecord.isSelected}>{thisRecord.name}</li>
    </template>
</ul>

OK - so what other options do we have?

Option 1 - Build your class lists in your data:

So, we could build up the list of classes that we want to render against each record in our data - once the data is populated from wherever it comes from, we can loop over the records and update them so we end up with something like:

JSON data:

[ { "id" : 1
, "name" : "Anne"
, "isSelected" : false
, "classes" : "record" },
{ "id" : 2
, "name" : "Bob"
, "isSelected" : true
, "classes" : "selected record" },
{ "id" : 3
, "name" : "Carla"
, "isSelected" : true
, "classes" : "selected record" }
]
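
As a rough sketch of the kind of processing involved - assuming we've had to fall back to an imperative Apex call (the Apex class, method and property names here are purely illustrative):

import { LightningElement, track } from 'lwc';
import getRecords from '@salesforce/apex/RecordController.getRecords'; // illustrative Apex method

export default class RecordList extends LightningElement {

    @track records = [];

    connectedCallback() {
        getRecords()
            .then( data => {
                // Apex data is immutable, so copy each record and add the 'classes' property
                this.records = data.map( record => ( {
                    ...record,
                    classes : record.isSelected ? 'selected record' : 'record'
                } ) );
            } );
    }
}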

We can then render the required output like this:


<ul>
    <template for:each={records} for:item="thisRecord">
        <li key={thisRecord.id} class={thisRecord.classes}>{thisRecord.name}</li>
    </template>
</ul>
Pros:
  • The template is simple.
  • Since we're processing in Javascript, we can draw on any information and make whatever complex rules we want.
Cons:
  • We need to process the data after we’ve built it. If this data comes from a @wire’d property or function, then we’re in trouble. We’d need to jump through some hoops to get the classes specified. We'll likely find using @wire is no longer an option because of that processing and we need to retrieve with an imperative call instead.
  • Data retrieved from Apex is immutable, so even if this is the only thing we need to add to the data, we may find that we need to copy the data into new objects, or add a new data structure, in order to get the classes property added.
  • The logic for the classes that each record should have assigned is held in Javascript (or even worse, Apex).
Option 2 - Use a template 'if' and repeat the li tag.

If we want to avoid doing anything complex in our Javascript, we can add template 'if's into the markup, and conditionally render the <li> tag in its two different forms.

For example, we could do the following:


<ul>
    <template for:each={records} for:item="thisRecord">
        <template if:true={thisRecord.isSelected}>
            <li key={thisRecord.id} class="selected record">{thisRecord.name}</li>
        </template>
        <template if:false={thisRecord.isSelected}>
            <li key={thisRecord.id} class="record">{thisRecord.name}</li>
        </template>
    </template>
</ul>
Pros:
  • The Javascript doesn't contain any of the logic for the conditional rendering.
Cons:
  • We're breaking the "Don't repeat yourself" (DRY) principle, and repeating the structure of the <li> tag in each side of the IF condition. In this simple case this may not seem like a big deal, but still - any change to that rendering now needs to be made in 2 places, instead of 1. And let's be honest, how often is the case this simple? We'll probably find that we have to copy a LOT of the template to work like this.
Option 3 - Use a template 'if', and change our CSS.

Another alternative is to use the template if, but to isolate the part that changes from the part that doesn't. That is, we introduce HTML inside our 'if:true' that only exists in order to apply the formatting that should be assigned to the 'isSelected' records.

That is, we do the following in our template, to introduce a new, standalone, div that has the 'selected' class applied, and then wrap the content of the <li> in another div.


<ul>
    <template for:each={records} for:item="thisRecord">
        <li key={thisRecord.id} class="record">
            <template if:true={thisRecord.isSelected}>
                <div class="selected"></div>
            </template>
            <div>{thisRecord.name}</div>
        </li>
    </template>
</ul>

Having done this, we can use more advanced CSS selectors to apply our 'selected' style to the div that follows the div with 'selected' as its class.

For example, let's say our 'selected' records should have a green border:


.selected+div {
    border: 1px solid green;
}

The selector '.selected+div' means 'the div that immediately follows the element with the class selected'.

You can read about CSS Selectors here.

Pros:
  • We conditionally render only the addition of the class in the template - nothing is repeated.
  • The Javascript doesn't contain any of the logic for the conditional rendering.
Cons:
  • We need to introduce additional structure into the HTML that exists purely to apply classes to other elements. This isn't ideal, and can change the behaviour of other styling further down the structure (e.g. we have introduced an extra div here - what impact does that have?)
Option 4 - Introduce a sub component.

It would be good if we could call a getter function at run time in order to get the list of classes, along the lines of the component described in the earlier post.

The problem is that we can't call a function and pass it the context of the individual record that we are rendering.

So does that mean we can't call a function?

No, it just means that we need to narrow the context of the component down into each record before we call the function - and we can do that with a sub-component that just renders the <li>.

We can call our sub-component (recordRenderer) with something like this:


<ul>
    <template for:each={records} for:item="thisRecord">
        <c-record-renderer key={thisRecord.id} record={thisRecord}></c-record-renderer>
    </template>
</ul>

Our sub-component template can be:


<template>
    <li class={classes}>{record.name}</li>
</template>

And our sub-component javascript can be:


import { LightningElement, api } from 'lwc';

export default class RecordRenderer extends LightningElement {

    @api record;

    get classes() {
        if ( this.record.isSelected ) {
            return 'selected record';
        }
        return 'record';
    }
}
Pros:
  • Both the template for the parent and sub component are very simple and focused on small aspects of the rendering - no IFs in the templates is a good thing.
  • It's very possible that more complex behaviour will be required in the future, and having the context narrowed to the individual record could make that behaviour much simpler to implement. For example, passing record data into events raised by a 'click' is well served by having that distinction between the parent and child components and their contexts.
Cons:
  • OK, we have the classes logic inside the Javascript, much like in Option 1, but we don't have to process the data - the getter is called at render time. And, presumably the code in the renderer is near trivial, so maybe that's not such a big problem.
Conclusions

The limitations of the expressions allowed in templates make for a less elegant solution to this kind of problem.

I would find it hard to argue that either Option 1 or 2 are worth the pain - neither fiddling with the data model nor repeating large chunks of the template feels like good practice to me.

Options 3 and 4 both have their merits - Option 3 has very few drawbacks other than the introduction of the almost redundant HTML elements - but if you can fulfil your needs in this way, well the CSS selectors are there for a reason, so why not?

Personally though, I'd almost always favour Option 4 - decomposing into smaller components.

Honestly, I think that pushing us down this route is a good thing - smaller components are simpler to understand, just as smaller classes and methods are, and with the standardised event model that Lightning Web Components use, this approach can make for very simple, small, and potentially re-usable components, each presenting itself in a much simpler way.

It may require us to think about our components in a different way to how we approached Lightning Components, but I firmly believe that we will end up with cleaner, and more easily understandable and maintainable building blocks.

Lightning Web Components - Events, and listening to your children

Thu, 2018-12-20 09:13

Another fantastic inclusion in Lightning Web Components is the completely reworked events model.

De-composing functionality and building smaller, and more generic building blocks has become much simpler and much more intuitive.

In the world of Lightning Components I never got on with events. The idea of adding a Salesforce configuration for an event, registering events on the dispatcher template, and then registering listeners on the receiving template seemed really cumbersome. And added onto that were the differences in syntax between component and application events. They just felt really unnatural.

In Lightning Web Components all this has become significantly simpler, and much more in-keeping with the standard HTML / Javascript model.

We've already seen how we can use @api to allow state to be passed into our components. Now we're talking about notifying our parents when events occur.

I could go into deep detail on how this is done, but the documentation on this area is spot on, and there's no need to repeat it - follow the guide in the docs and you can't go far wrong. It's particularly well written and introduces the concept brilliantly.

That said, there has to be something to say, right?

Well, yes, and before I go into some of the less obvious limitations, let's just present a simple example:

  • In the child component, we create and dispatch an event.
  • When we include the child component, we specify the handler for the event.

Something along the lines of:

Child component's Javascript

import { LightningElement, track } from 'lwc';

export default class ChildComponent extends LightningElement {

    @track value;

    // Called from the onchange handler on an input
    handleValueChanged( event ) {
        this.value = event.target.value;
        this.dispatchEvent( new CustomEvent( 'valuechanged', { detail: this.value } ) );
    }
}
Parent component's template

<c-child-component onvaluechanged={handleOnValueChanged}></c-child-component>
Parent component's Javascript

import { LightningElement, track } from 'lwc';

export default class ParentComponent extends LightningElement {

    @track updatedValue;

    handleOnValueChanged( event ) {
        this.updatedValue = event.detail;
    }
}

OK. So how simple is that? No Salesforce configuration to create, nice simple syntax, event handlers defined in the template - exactly the same way you would if it was a standard HTML tag.

Without wanting to repeat the documentation from Salesforce, it's worth calling out a few important points:

  • dispatchEvent and CustomEvent are standard Javascript.
  • When you include the child component, you specify the handler for the event in the template.
    • The event name should not start with 'on'; the attribute you assign the handler to will have 'on' prepended to it.
    • The fact we can specify the handler as 'onvaluechanged' when we create the tag is LWC specific, and for very good reason (explained later). You cannot do this with standard Web Components.
  • We can pass data from the child component in the event, by passing an object as the second parameter.
    • Note that the data can only go in the 'detail' property. If you add data to any other standard property you may accidentally overwrite it, and any non-standard property you add won't be visible in the received event anyway - you put data into 'detail', and that's all you have. Live with it.
    • You can pass an object, but if you do you should construct it there and then. But you probably shouldn't.
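
To make that last point concrete, here's a minimal sketch of dispatching an event that carries an object in 'detail', constructing the payload at the point of dispatch (the component, event and property names are assumptions for illustration):

import { LightningElement, api } from 'lwc';

export default class ChildComponent extends LightningElement {

    // hypothetical properties, purely for the sketch
    @api recordId;
    @api recordName;

    handleSave() {
        // build the payload here and now, rather than passing a reference to internal state
        this.dispatchEvent( new CustomEvent( 'recordsaved', {
            detail : {
                id   : this.recordId,
                name : this.recordName
            }
        } ) );
    }
}

The parent's handler then reads event.detail.id and event.detail.name, just as it read event.detail in the earlier example.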

OK, that's all well and good - but where are the limitations?

Well, the main one I've found was a real surprise to me - to the point that I'm worried that I've misunderstood something.

In the standard Javascript events model - all events propagate to all levels.

For example, if I have the following HTML:


<div id="grandparent" onchange="handleChange();">
<div id="parent">
<div id="child">
<input onchange="handleChange();"/>
</div>
</div>
</div>

When the value of the input changes, the onchange event is handled by both the onchange handler on the input and the one on the 'grandparent' div. Events propagate through the whole DOM, unless a handler stops it by calling 'stopPropagation' against the event.

It's generally recognised that this is a good thing, and that events should not be stopped unless there's very good reason.

However, as far as I can see, this is not true when you cross boundaries between LWCs.

For example, if I had the above example for a child component, and included it in a parent as such:

Parent component's template

<c-child-component onvaluechanged={handleOnValueChanged}></c-child-component>

And then included that in the grandparent as such:

Grandparent component's template

<c-parent-component onvaluechanged={handleOnValueChanged}></c-parent-component>

Assuming that the parent component does not raise a 'valuechanged' event of its own, the 'onvaluechanged' handler on the grandparent component will never get called.

It seems that you can only handle a component's event in its parent's scope.

Note: these are actually slightly different scenarios I'm explaining, but I think it's worthwhile in order to illustrate the point. Also, there is a 'bubbles' property that you can set on the CustomEvent when you create it, although I didn't see a change in behaviour when I did that.
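
For reference, the relevant CustomEvent options look like this. In the standard Web Components world it's the combination of 'bubbles' and 'composed' that determines whether an event crosses shadow boundaries, so a fully-propagating event would be configured along these lines (purely for illustration - I haven't verified that this changes the behaviour described above):

// In the child component's handler - 'bubbles' lets the event travel up the DOM,
// 'composed' lets it cross shadow boundaries
handleValueChanged( event ) {
    this.value = event.target.value;
    this.dispatchEvent( new CustomEvent( 'valuechanged', {
        detail   : this.value,
        bubbles  : true,
        composed : true
    } ) );
}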

As I've said, I'm surprised by this behaviour, so am happy to be told I'm wrong, and learn where my mistake is.

Adding an event handler via Javascript

So what of the 'on' behaviour? Why is this such a cool addition?

Well, that's best explained by illustrating what we would need to do if this wasn't available to us.

Let's go back to our child component:

Child component's Javascript

import { LightningElement, track } from 'lwc';

export default class ChildComponent extends LightningElement {

    @track value;

    // Called from the onchange handler on an input
    handleValueChanged( event ) {
        this.value = event.target.value;
        this.dispatchEvent( new CustomEvent( 'valuechanged', { detail: this.value } ) );
    }
}

It dispatches a 'valuechanged' event that we can handle in a parent component.

We include the child component with a simple node:

Parent component's template


<c-child-component></c-child-component>

Note we are no longer setting onvaluechanged because, in our hypothetical scenario, this is not possible.

Now, in order to handle the event we need to attach a handler to the component in our parent component's Javascript.

First we need to find it, so we set a property on the component that we can use to retrieve it. You may default to setting an 'id', but it turns out that Salesforce will adjust the ids on nodes, so we can't rely on that. Instead, we decide to set a class:

Parent component's template

<c-child-component class="child"></c-child-component>

Now, the parent component's Javascript. We need to hook into one of the lifecycle callbacks in order to attach our handler.

You can see the docs for those functions here.

From there we find:

  • We can't use the constructor, as the component hasn't been added to the DOM yet.
  • We can't use the connectedCallback, as the component's children haven't been rendered yet.
  • We can use the renderedCallback, but this gets called multiple times - whenever any reactive properties change, so we need to protect against multiple adds.

So, maybe we can do this:


allocatedEventListeners = false;

renderedCallback() {
    if ( ! this.allocatedEventListeners ) {
        this.template.querySelector( '.child' ).addEventListener( 'valuechanged', this.handleOnValueChanged );
        this.allocatedEventListeners = true;
    }
}

That is a bit clunky, but it looks like it should work - we attach the 'handleOnValueChanged' function to the event listener.

Unfortunately, it doesn't - because of a fundamental behaviour of Javascript, the event handler doesn’t have access to the ‘this’ we expect. And if you’re not an experienced Javascript developer then that’s when things start to get a bit crazy (actually, even if you ARE an experienced Javascript developer, I suspect it STILL gets a little messed up).

Basically, 'this' isn’t guaranteed to be what you think it is. If you write code that behaves in a procedural way, then it will generally be the object in which the method is defined. But as soon as you add in callbacks, Promises and asynchronous behaviour, it isn't guaranteed to be.

'this' can be simply the context in which the function runs, rather than the object or class in which the function is defined. This is an incredibly powerful aspect of Javascript that is very difficult to get to grips with unless you’re used to seeing it.

In Lightning Components you can see the effect of this in code such as Apex callouts in helpers where you end up with:


let self = this;

In our particular case, you could use an alternative - the fat arrow notation for defining functions.


event => { this.handleOnValueChanged( event ) }

Which would transpile to (or is synonymous with) something like this:


var _this = this;
( function( event ) { _this.handleOnValueChanged( event ); } );

Look familiar?

The resulting code for adding the event handler could end up like this:


allocatedEventListeners = false;

renderedCallback() {
    if ( ! this.allocatedEventListeners ) {
        this.template.querySelector( '.child' )
            .addEventListener( 'valuechanged',
                               ( event ) => { this.handleOnValueChanged( event ); } );
        this.allocatedEventListeners = true;
    }
}

In the end, this would work. But no-one would suggest it was elegant. And in order to get it working we had to brush up against some advanced behaviour of 'this'. Now, I admit that people are going to have to learn how 'this' and its binding behaves in order to write reliable Lightning Web Components - but just to add an event handler?

The reality is that we don't have to think about it - Salesforce have given us a very usable shorthand for it, and we should be extremely grateful for it!

Lightning Web Components - Unit Testing immediate impression.

Wed, 2018-12-19 12:37

One of the things I’m most excited about in Lightning Web Components is the ability to write Unit Tests for the components.

The unit testing framework of choice is Jest, and it looks well suited. Not least it’s the framework of choice for Facebook, and describes itself as well suited to React apps. Why should that matter? Well, React is a 1-way bound Javascript framework - and so is LWC.

So I was looking forward to getting into Unit Testing, following the documentation for testing wired components.

Unfortunately, the documentation didn’t work out for me, and it looks like there are a couple of mistakes in there:

The example ‘getRecord.json’ file isn’t valid JSON.

In order for the file to work, the field names need to be surrounded in double quotes.

I.E. Instead of:


// getRecord.json
{
    fields: {
        Name: {
            value: "DYNAMO X1"
        }
    }
}

The file should be:


// getRecord.json
{
    "fields": {
        "Name": {
            "value": "DYNAMO X1"
        }
    }
}

Interrogating the ‘element’ for its state does not seem to work.

Instead, I found that I needed to get data from the document object.

I.E. The following does not work:


// Resolve a promise to wait for a rerender of the new content.
return Promise.resolve().then(() => {
    const content = element.querySelector('.content');
    expect(content.textContent).toBe("Name:DYNAMO X1");
});

But the following does:


// Resolve a promise to wait for a rerender of the new content.
return Promise.resolve().then(() => {
    const content = document.body.querySelector('.content');
    expect(content.textContent).toBe("Name:DYNAMO X1");
});

Mocking doesn't seem to work for Apex, only LDS

From the quick test I did, I could get the mocking framework to work for the Lightning Data Service, once my implementation of the example was tweaked. However, I couldn't get it to work with an imported Apex method.
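
For what it's worth, the shape of the thing I was attempting was roughly the following - standard Jest module mocking pointed at the Apex import (the controller and method names are just examples):

// At the top of the test file - replace the imported Apex method with a jest mock function
jest.mock(
    '@salesforce/apex/ContactController.getContactList',
    () => ({ default: jest.fn() }),
    { virtual: true }
);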

I didn't see a reference to this being missing, though I guess I may have just missed that, and I know that the recommendation is to use LDS whenever you can. I just worry that there's a really important use case out there - it seems natural to me that components that use custom Apex are likely to be more complex than ones that use LDS. And with that in mind, it feels like missing Apex will be a big loss to the testing framework.

Hopefully the last part is already known about, is a bit of missing documentation, or is simply that I misunderstood something.

Whatever the case, I plan on doing more investigations into the Unit Testing capabilities, and will obviously blog my findings - but I have to admit that I found the initial experience a little disappointing after the extremely polished experience up to now.

I sincerely hope that it isn’t an indicator that Unit Testing is a bit of an afterthought.

Lightning Web Components - @api, slots and getters

Wed, 2018-12-19 05:49

I've blogged about a few of the behaviours of Lightning Web Components, but the proof is really in building useful bits. What happens when you actually try to make a re-usable component?

For our example, we'll rebuild 'ui:message' - a now (seemingly) defunct base component that would render a message in a box coloured based on the 'severity' of the message being shown. In the original it could be set to 'closable', although we're going to ignore that and focus on just the rendering of it.

In a Lightning component we would use it like this:

Original usage - Lightning Component

<ui:message title="Error" severity="error" >{!v.errorMessages}</ui:message>

Ideally, the version we will create would be used like this:

Desired usage

<c-message title="Error" severity="error" >{errorMessages}</c-message>

Looks pretty straightforward, and actually - it is. Just as long as we know about a few simple concepts.

Before we go into them, let's see what a working example could look like:

Javascript component

import { LightningElement, api } from 'lwc';

export default class Message extends LightningElement {

    @api title;
    @api severity;

    get classes() {
        return this.severity + ' uiMessage';
    }
}
HTML Template

<template>
    <div class={classes} role="alert" >
        <div class="uiBlock" >
            <div class="bBody" >
                <h4>{title}</h4><slot></slot>
            </div>
        </div>
    </div>
</template>

OK then, let's pick a few of these bits apart, and hopefully we'll explain a few little behaviours along the way.

First up, let's take a look at the '@api' declarations.

@api

The @api property lines are pretty simple to understand - they define that 'title' and 'severity' are publicly available properties of the component. In the context of Lightning Web Components, public and private mean 'available outside of the component' and 'invisible outside of the component' respectively. It's tempting to think that this is to do with the scope of the Javascript, but it's not.

That is, every property of the Javascript component is available to be referenced in the HTML template - whether it is 'public' or 'private'. One way of thinking about it is that the HTML template forms part of the component, and so it can see private properties.

Another (probably more accurate) way of thinking about it is that the template is processed by the Javascript component (that code isn't immediately obvious, but it's almost certainly in LightningElement - which this class extends), and the Javascript can see its own properties, so the private ones are available.

However, other components (like ones that include this in their templates) can only see public properties. @api is how you make them public. Doing so means that they are available as attributes on the tag you use to include the component (hence <c-message title="Error"... is possible)

Not only that, but every @api decorated property is also 'reactive'. That is, whenever its value changes the component is re-rendered. The documentation is pretty clear on that point - and is presented as a fundamental property of a public property:

Public Properties

To expose a public property, decorate it with @api. Public properties define the API for a component. An owner component that uses the component in its markup can access the component’s public properties. Public properties are reactive. If the value of a reactive property changes, the component’s template rerenders any content that references the property.


Why would a public property be reactive?

Put simply, if we change the value of one of those properties in a parent component, we want the component to re-render - and it's pretty much guaranteed that we ALWAYS want the component to re-render.

For example, we may do the following:


<c-message title={title} severity={severity} >{messages}</c-message>

When the value of 'title' or 'severity' changes, we would always want the message box to re-render to show our new values. And so the framework takes care of that and makes EVERY public property of the component reactive.
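
As a quick illustration, a parent component along these lines (the names are assumptions for the sketch) only has to assign to its own properties - the message box picks the change up and re-renders by itself:

import { LightningElement, track } from 'lwc';

export default class MessageDemo extends LightningElement {

    @track title    = 'Error';
    @track severity = 'error';
    @track messages = 'Something went wrong';

    // Called from, say, an onclick handler elsewhere in the template
    downgradeToWarning() {
        this.title    = 'Warning';
        this.severity = 'warning';
    }
}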

So that takes care of the attributes we need to pass in, what about the content?

Slots

Lightning Components had facets. And they weren't intuitive. I mean they weren't complex, but they weren't in keeping with HTML - they always felt unnatural - especially in the simplest of cases.

Lightning Web Components fixes that, with slots. And in the simple case they are trivial. The documentation isn't long, and doesn't need to be.

All we need to do, in this simple case, is add <slot></slot> into our component, and the body of any tag that instantiates the component will be rendered in that slot.

Now something that's missing from the documentation, which is a fairly obvious behaviour once you see it in action, is that slots are effectively reactive.

That is, if you change the content of the tag, that content is immediately reflected in the component's rendered output.

So, in our example:


<c-message title="Error" severity="error" >{errorMessages}</c-message>

Whenever the value of 'errorMessages' changes, the slot inside the 'message' component is re-rendered to include the new content.

I admit, I had assumed that this would be the case, but I didn't immediately realise that it was an assumption. So I thought it was worth calling out.

Getters

The final part of the example that I want to explain is the use of the 'getter':


get classes() {
    return this.severity + ' uiMessage';
}

What we're doing here is building a list of CSS classes for a node in the component - one that includes the passed-in 'severity' attribute plus a standard class that must always be applied.

The use of the getter illustrates an important difference between the behaviour of the templates in Lightning Components (LC) and Lightning Web Components (LWC), as well as a reminder of the behaviour of properties.

That is, in LC we could have done the following in our template:


<div class="{!v.severity + ' uiMessage'}" role="alert" >

In LC, our replacements could include expressions, so we could build up strings in the template. In LWC, we can't do this, we can only reference properties or getters.

Not only that, but we can't build up the strings in the attribute assignment.

I.E. We can't do this:


<div class="{severity} uiMessage" role="alert" >

In LWC we don't assign properties to attributes in this way - the framework takes care of wrapping in double quotes, escaping the strings, and other such things - so we can only assign the property, and that's it.

I.E. This is what is allowed:


<div class={severity} role="alert" >

So, if we want to assign more than just the value of 'severity' to the class attribute, we need to build that string up outside of the template.

Your first reaction might be - OK, we can create a trackable property to store it, right?


@track classes = this.severity + ' uiMessage';

But this doesn't work. You'll end up with the classes property defined as 'undefined uiMessage', and it won't change. Why is that?

Well, it's tempting to think that 'track' and 'api' mean that Javascript will re-run when things change, but that's not what they do - nor what the documentation says they'll do.

Rather, if a property is reactive it means that the component will be re-rendered when the property changes. That says nothing about running Javascript.

So when we look at the above, what happens is the property 'classes' is set when the Javascript object is constructed. At this point the property 'severity' is undefined. When the 'severity' is updated via the attribute, the component is re-rendered and the 'classes' property is re-injected into the template, but the Javascript that sets the classes property is not re-run - that is only executed when the object is instantiated.

So, instead of setting the 'classes' property directly, we set up a getter for it:

Javascript component

get classes() {
    return this.severity + ' uiMessage';
}

Now, when the 'severity' property changes, the 'classes' property is re-injected. In order to get the value for 'classes', the getter is executed - this is the only way the property can be retrieved. In doing so, the string concatenation is re-evaluated and the new value is retrieved.

Summary

None of the concepts here are particularly difficult, or really that earth-shattering, but building even the simplest of re-usable components starts to shed some light on what the parts do and why.

The framework has been very thoughtfully put together, and some of the techniques will be different to what people are used to, having cut their Javascript teeth with Lightning Components, but the changes are for very good reasons. An example like this really shows how those changes make for simple components.

Lightning Web Components - The subtleties of Tracking and Wiring

Tue, 2018-12-18 06:31

Following on from yesterday's investigations into the behaviour of '@track', and its effect on untracked properties, I figured I should expand my remit to '@wire'.

My main reason for that was the statement in the documentation:

In the wire adapter’s configuration object, prefix a property with $ to reference a property of the component instance. The $ prefix tells the wire service to treat it as a property of the class and evaluate it as this.propertyName. The property is reactive. If the property’s value changes, new data is provisioned and the component rerenders.

This comment relates to code along the lines of:


@api recordId;

@wire(getRecord, { recordId: '$recordId', fields })
contact;

The bit that really piqued my interest was 'The property is reactive'.

In all the examples, it looked like the property being referenced was always tracked or set as an api field (the latter implying the former).

That's the case in the above example - the property passed as a parameter 'recordId' as referenced by '$recordId' is defined as an api field, and is therefore tracked.

There's also that recurring point 'the component rerenders', which we saw in my previous post. Recalling that, it essentially meant that a change to a tracked property caused all untracked properties in the same component (and I am assured, but yet to prove, child components) to be re-rendered.

So, what is the actual behaviour in this case? Are the implications the same?

You can find code that illustrates the examples below, as well as the points from yesterday's post, in this bitbucket repository if you want to explore the behaviours further. I suggest you take a look.

Tracked property used as a parameter to a wired property

The natural first example is much like the one shown above. We have an Apex method that takes a parameter. We wire the method to a javascript property, and reference a tracked property as its parameter.

Javascript component

@track searchString;

@wire(searchContacts, { searchString: '$searchString' })
contacts;

When the searchString property is updated:

  • The searchString's change causes the wired function to be re-evaluated.
  • Therefore the value of 'contacts' changes.
  • The component treats contacts as if it was tracked and causes the component to be re-rendered.

OK. That's pretty decent, and what the documentation says will happen. No surprises there!

But then I got to thinking: So, what happens if the property isn't tracked?

My first guess was that maybe the Apex method wouldn't get re-executed, but I wasn't entirely convinced - after all, the documentation only really makes a big deal about rendering. So I changed the code and tried again...

Javascript component

searchString;

@wire(searchContacts, { searchString: '$searchString' })
contacts;

This time, when the searchString property is updated:

  • The searchString's change causes the wired function to be re-evaluated.
  • Therefore the value of 'contacts' changes.
  • The component treats contacts as if it was tracked and causes the component to be re-rendered.

Erm. OK. It's the same. Hmmm....

Looking at what's actually going on, that does make sense. It's the '$searchString' reference that tells the framework that searchString is reactive with respect to the wiring, so it's that which causes the Apex to re-execute. And once the method is re-evaluated, the value of 'contacts' changes, and that causes the component to re-render.

That got me to thinking - so what does the '@track' do against the searchString in the original example? Experience tells me that these things generally will have an effect.

So I added something else to my example...


renderedCallback() {
    console.log( 'renderedCallback was called' );
}

This hooks into the component's lifecycle, as described here, and will tell us when the component gets re-rendered.

It turns out that my initial understanding of the first example was slightly wrong, though not in a way that would generally have much of an impact.

That is, if the parameter is tracked, you end up with the following:

  • The searchString's change causes:
    • The component to be re-rendered (since it is tracked).
    • The wired function to be re-evaluated (since it is referenced as a '$' parameter).
  • The execution of the wired function causes the value of 'contacts' to change.
  • The component treats contacts as if it was tracked and causes the component to be re-rendered.

The result is that the component is re-rendered twice!

And sure enough, if you take the tracking off the searchString parameter, the component is only re-rendered once.

So, this does reinforce another point that the documentation makes here:

Don’t overuse @track. Track a property only if you need the component to rerender when the property’s value changes.

Personally, I think I'd call out this particular behaviour and remind people - you don't need to 'track' a property in order for the wired method to re-execute - but maybe it's just my lack of attention that missed that little point.

And I'd also state that you should only track a property that is being used as a reactive wired parameter if you need the property to be re-rendered before the Apex you are calling returns with its result.

Lightning Web Components - the art of Tracking

Mon, 2018-12-17 08:15

On Friday I wrote an excited blog post on the introduction of Lightning Web Components.

One of the things I mentioned was that I didn't quite get how '@track' was supposed to work.

Since then, I've found the documentation (add /docs/component-library/documentation/lwc/lwc.get_started_introduction to the end of your Salesforce URL), read it, got confused by my memory, wrote a test-rig and looked at it all in depth.

On the surface, it's simple, but there are some complications, which were the cause of my confusion...

In the documentation it says the following:

Both @track and @api mark a property as reactive. If the property’s value changes, the component re-renders.

OK, nice and simple. So what does that mean?

A single untracked property

Testing with a simple component that contains a single property that is not tracked I found the following behaviour:

Javascript component

import { LightningElement } from 'lwc';

export default class TrackExperiments extends LightningElement {

    primitiveUntracked;

    handlePrimitiveUntrackChanged( event ) {
        this.primitiveUntracked = event.target.value;
    }
}
Template

<p>The value is: {primitiveUntracked}</p>
<lightning-input label="Input without an on-change handler"
type="Text"
value={primitiveUntracked}></lightning-input>
<lightning-input label="Input with an on-change handler"
type="Text"
value={primitiveUntracked}
onchange={handlePrimitiveUntrackChanged}></lightning-input>
  • Changing the value defined in either of the inputs will not result in the rendered value changing.
  • Changing the value defined in the input without the onchange handler will not change the Javascript property.
  • Changing the value defined in the input that does have an onchange handler will update the Javascript property.

So, fairly straightforward, and maybe in-line with what you would expect:

  • Untracked properties are not re-rendered in the UI when they change
  • Untracked properties that are bound to inputs are not automatically updated when the input changes unless there is an onchange handler defined that updates the Javascript property
A single tracked property

Testing with a simple component that contains a single property that is tracked, I found the following behaviour:

Javascript component

import { LightningElement, track } from 'lwc';

export default class TrackExperiments extends LightningElement {

    @track primitiveTracked;

    handlePrimitiveTrackChanged( event ) {
        this.primitiveTracked = event.target.value;
    }
}
Template

<p>The value is: {primitiveTracked}</p>
<lightning-input label="Input without an on-change handler"
type="Text"
value={primitiveTracked}></lightning-input>
<lightning-input label="Input with an on-change handler"
type="Text" value={primitiveTracked}
onchange={handlePrimitiveTrackChanged}></lightning-input>
  • Whenever the value of the Javascript property changes, it is reflected on the outputted page.
  • Changing the value defined in the input without the onchange handler will not change the Javascript property, and so the rendered property does not change.
  • Changing the value defined in the input that does have an onchange handler will update the Javascript property, and both the rendered property and the value in the other input is updated.

OK, in my opinion, slightly odd - I was expecting the input binding to be two-way - that's not the case:

Lightning Web Components contain 1-way binding.

Still, it may have surprised me, but it's easy to remember, and no big deal once you know that's the behaviour.

  • Tracked properties are re-rendered in the UI when they change.
  • Tracked properties that are bound to inputs are not automatically updated when the input changes unless there is an onchange handler defined that updates the Javascript property.
  • Inputs that are bound to tracked properties are automatically updated when the underlying property is updated.
A tracked property and an untracked property on the same page

Javascript component

import { LightningElement, track } from 'lwc';

export default class TrackExperiments extends LightningElement {

    @track primitiveTracked;
    primitiveUntracked;

    handlePrimitiveTrackChanged( event ) {
        this.primitiveTracked = event.target.value;
    }

    handlePrimitiveUntrackChanged( event ) {
        this.primitiveUntracked = event.target.value;
    }
}
Template

<p>The value is: {primitiveTracked}</p>
<lightning-input label="Input without an on-change handler"
type="Text"
value={primitiveTracked}></lightning-input>
<lightning-input label="Input with an on-change handler"
type="Text" value={primitiveTracked}
onchange={handlePrimitiveTrackChanged}></lightning-input>

<p>The value is: {primitiveUntracked}</p>
<lightning-input label="Input without an on-change handler"
type="Text"
value={primitiveUntracked}></lightning-input>
<lightning-input label="Input with an on-change handler"
type="Text"
value={primitiveUntracked}
onchange={handlePrimitiveUntrackChanged}></lightning-input>

Now things start to get a little surprising.

  • Changing the inputs related to the tracked property works as described above, as if it was the only property on the page.
  • Similarly, changing the inputs related to the untracked property initially behaves as described above.
  • However, changing the tracked property inputs after changing the untracked input causes both the tracked and untracked properties to be re-rendered.

The last point really took me by surprise - I was expecting that only the tracked property would be updated. This led me to go back to the documentation again and re-read it.

That is (bold added for emphasis):

Both @track and @api mark a property as reactive. If the property’s value changes, the component rerenders.

Yep, I think I'm reading that right, and it looks like it's reflected in the behaviour I see (without delving too deep into it). When you change the value of a tracked property, the whole component re-renders. This means that any untracked properties will also be re-rendered.

I've put together a more complete illustration of that, which you can find in this git repo.

Lightning Web Components - the dawn of (another) new era

Fri, 2018-12-14 08:04

Salesforce have a new technology. Lightning Components look like they’re on the way out, being replaced with ‘Lightning Web Components’.

The reasons behind that, and the main principles behind its design are covered very nicely in this article on developer.salesforce.com.

From that we can then get to a series of examples here.

(Note: some of the code snippets used below, to illustrate points, are taken from the recipes linked above)

Now I’m a big supporter of evolution, and I love to see new tools being given to developers on the Salesforce platform, so, with a couple of hours to play with it - what’s the immediate impression?

This is an article on early impressions, based on reviewing and playing with the examples - I fully expect there to be misunderstandings, bad terminology, and mistakes in here - If you're OK with that, I'm OK with that. I admit, I got excited and wanted to post something as quickly as possible before my cynical side took over. So here it is - mistakes and all.

WOW. Salesforce UI development has grown up.

Salesforce aren’t lying when they’ve said that they’re trying to bring the development toolset up to the modern standards.

We get imports, what look like annotations and decorators, and there’s even mention of Promises. Maybe there’s some legs in this…

It’s easy to dismiss this as ‘Oh no, yet another change’, but the thing is - the rest of industry develops and improves its toolset - why shouldn’t Salesforce?

The only way to keep the product on point IS to develop the frameworks, replace the technology, upgrade, move on. If you don’t do that then the whole Salesforce Ecosystem starts to stagnate.

Or to put it another way - in every other part of the developer community, learning from what was built yesterday and evolving is seen as a necessity. It’s good to see Salesforce trying to keep up.

So what are the big things that I’ve spotted immediately?

import is supported, and that makes things clearer

Import is a massive addition to Javascript that natively allows us to define the relationships between javascript files within javascript, rather than at the HTML level.

Essentially, this replaces the use of most ‘script’ tags in traditional Javascript development.

For Lightning Web Components, we use this to bring in capabilities from the framework, as well as static resources.

E.g. Importing modules from the Lightning Web Components framework:


import { LightningElement, track } from 'lwc';

Importing from Static Resources:


import { loadScript } from 'lightning/platformResourceLoader';
import chartjs from '@salesforce/resourceUrl/chart';

What this has allowed Salesforce to do is to split up the framework into smaller components. If you don’t need to access Apex from your web component, then you don’t need to import the part of the framework that enables that capability.

This *should* make individual components much more lightweight and targeted - only including the capabilities that are required, when they are required.

Getting data on screen is simpler

Any javascript property is visible to the HTML template.

E.g.


export default class WebAppComponentByMe extends LightningElement {
contacts;

We can then render this property in the HTML with {contacts} (none of those attributes to define and none of those pesky v dot things to forget).

Much neater, much more concise.

We track properties

Looking at the examples, my assumption was that if we want to perform actions when a property is changed, we mark the property trackable using the @track decorator.

For example:


export default class WebAppComponentByMe extends LightningElement {
@track contacts;

I was thinking that, at this point, anything that references this property (on page, or in Javascript) will be notified whenever that property changes.

However, at this point I can't really tell what the difference is between tracked and non-tracked properties - a mystery for another day

Wiring up to Apex is much simpler

One of the big criticisms of Lightning Components that I always had was the amount of code you need to write in order to call an Apex method. OK, so you have force:recordData for a lot of situations, but there are many times when only an Apex method will do.

In Web Components, this is much simpler.

In order to connect to Apex, we import the ‘wire’ module, and then import functions into our javascript


import { LightningElement, wire } from 'lwc';
import getContactList from '@salesforce/apex/ContactController.getContactList';

The first line imports the wire capabilities from the framework, the second then imports the Apex method as a javascript method, therefore making it available to the component.

We can then connect a javascript property up to the method using the wire decorator:


@wire(getContactList) contacts;

Or wire up a javascript method:


@wire(getContactList)
wiredContacts({ error, data }) {
    if (data) {
        this.contacts = data;
    } else if (error) {
        this.error = error;
    }
}

When the component is initialised, the getContactList method will be executed.

If the method has parameters, that’s also very simple (E.g. wiring to a property):


@wire(getContactList, { searchKey: '$searchKey' })
contacts;

Changing the value of a property causes Apex to re-execute

Having wired up a property as a parameter to an Apex bound Javascript function, any changes to that property will cause the function to be re-executed.

For example, if we:


searchKey = '';

@wire(findContacts, { searchKey: '$searchKey' })
contacts;

Whenever the searchKey property changes, the Apex method imported as ‘findContacts’ will be executed and the contacts property is updated.

Thankfully, we can control when that property changes, as it looks like changing the value in the UI does not automatically change the property on the Javascript object. In order to do that, we need to change the property directly.

E.g. Let’s say we extend the previous example and there’s an input that is bound to the property, with an onchange event defined - something along the lines of:

<lightning-input type="search" label="Search" value={searchKey} onchange={handleKeyChange}></lightning-input>

And the handler does the following:


handleKeyChange(event) {
    this.searchKey = event.target.value;
}

This will cause the findContacts method to fire whenever the value in the input is changed.

Note that it is the assignment to this.searchKey that causes the wired method to fire - it looks like the binding from the HTML is 1-way. I admit that I need to investigate this further.

Events do not require configuration to be implemented

Events work in a completely different way - but then that’s not a problem - Application and Component events were different enough to cause headaches previously. The model is actually much simpler.

The example in the above referenced repository to look at is ‘PubSub’.

It’s much too involved to go into detail here, but the result is that you need to:

  • Implement a Component that acts as the messenger (implementing registerListener, unregisterListener and fireEvent)
  • Any component that wants to fire an event, or listen for one, imports that messenger component and uses it to fire events or register listeners - as sketched below.
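
A bare-bones sketch of what that messenger module might look like - this is my own stripped-down reading of the 'PubSub' example, not the actual recipe code:

// pubsub.js - the shared state lives in a module-level 'const'
const listeners = {};

const registerListener = ( eventName, callback, thisArg ) => {
    if ( ! listeners[ eventName ] ) {
        listeners[ eventName ] = [];
    }
    listeners[ eventName ].push( { callback, thisArg } );
};

const unregisterListener = ( eventName, callback, thisArg ) => {
    if ( listeners[ eventName ] ) {
        listeners[ eventName ] = listeners[ eventName ].filter(
            listener => listener.callback !== callback || listener.thisArg !== thisArg
        );
    }
};

const fireEvent = ( eventName, payload ) => {
    if ( listeners[ eventName ] ) {
        listeners[ eventName ].forEach(
            listener => listener.callback.call( listener.thisArg, payload )
        );
    }
};

export { registerListener, unregisterListener, fireEvent };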

This would seem to imply that (at least a certain amount of) state is shared between components - specifically the values defined with 'const' at the module level.

Whatever the precise nature of the implementation, a pure Javascript solution is surely one that anyone involved in OO development will welcome.

I suspect that, in a later release, this will become a standard component.

Summary

Some people will be thinking "Man, glad I didn’t migrate from Classic / Visualforce to Lightning Experience / Components - maybe I’ll just wait a little longer for it all to settle down”.

You’re wrong - it won’t settle, it’ll continue to evolve and the technologies will be continually replaced by new ones. Eventually, the jump from what you have to where you need to get to will be so huge that you’ll find it incredibly hard. There’s a reason why Salesforce pushes out 3 releases a year, whether you want them or not - these technology jumps are just the same. The more you put it off, the more painful it’ll be.

The change from Lightning Components to Lightning Web Components is vast - a lot more than a single 3 letter word would have you suspect. The only real similarities between the two frameworks that I’ve seen up to now are:

  • Curlies are used to bind things
  • The Base Lightning Components are the same
  • You need to know Javascript

Other than that, they’re a world apart.

Also, I couldn’t find any real documentation - only examples - although those examples are a pretty comprehensive starting point.

Now, obviously it's early days - we're in pre-release right now, but what I've seen gives me great hope for the framework, it's a significant step forward and I can't wait to see what happens next. I wonder if a Unit Testing framework might follow (I can but hope)

You could wait, but hey, really, what are you waiting for? Come on, jump in. The change is exciting...

LinkedIn, and the GDPR age

Wed, 2018-11-28 13:34
I should start this post by saying I’m neither a lawyer, nor a GDPR expert.  Possibly both of those facts will become massively apparent in the text that follows.

Also, I’m not a LinkedIn Premium user - so it’s possible I’m missing something obvious by not having access to it.

But anyway, I’ve been thinking about how LinkedIn fits into a GDPR world, and something doesn’t seem quite right to me at the moment.

LinkedIn are in the data business, and they’re very good at protecting that asset.  They tend to be (quite rightly) pro-active in stopping people from extracting data from their systems and pushing it into their own systems.

As such, businesses (recruiters particularly) are encouraged to contact directly within LinkedIn, and they are offered tools to discover people and commence that communication.

Unfortunately, this lack of syncing between LinkedIn and in-house systems can cause a big problem with GDPR.

That is:
What happens if someone says to a recruitment organisation - “Please forget me, and do not contact me again”?

In this situation, the organisation is obliged to ‘remove' them from their systems.

At some point in the future another recruiter from the same organisation then finds the person on LinkedIn, without reference to their own systems and messages them using LinkedIn.

What happens next?

By the letter of the law, the organisation may not have done anything wrong.
  • The person is no longer in the organisation’s system, they were found on LinkedIn.
  • The person was not sent an e-mail, or phoned, they were messaged within LinkedIn.
  • The person has consented to have their data held by LinkedIn for the expressed purpose of being contacted by potential recruiters via the platform.

With all this in mind, it may be interpreted that it’s fair game to contact anyone on LinkedIn, regardless of their expressed desire not to be contacted by a particular company.

However, whilst this may be within the definition of the law, it’s pretty clear it’s not in the spirit of the law.

Note - Again I’m not a GDPR expert, nor a lawyer, so can't say for certain that it IS within the definition of the law - nor am I asserting that it is - just that I can imagine that it might be interpreted that way by some people.

And this is where things get complicated for LinkedIn.  I can see a few outcomes of this, but two of them could be extremely worrying for the future of LinkedIn.

Scenario - LinkedIn Premium is seen as an extension of a subscribing organisation’s IT systems.

It could be argued that, whilst LinkedIn is an independent entity, when they provide services to another organisation, their systems then become part of the remit of that subscribing organisation.

I.E. within LinkedIn, any action by a user and the storage of data of that action falls solely within the responsibility of the employer of the user that performs that action.  LinkedIn are not responsible for the use of the data in any way.

On first glance, this looks ideal to LinkedIn - no responsibility!

However, that’s not true - if there’s ever a test case that proves this point, then suddenly LinkedIn becomes a big risk to any organisation that uses it.

Over the course of the last 2 years or so, every data holding organisation in the EU has looked carefully at their data retention and use policies and systems and done what they can to protect themselves - in many cases I’m sure they have changed suppliers and systems where the existing systems have not proven up to scratch in the light of GDPR legislation.

Up to now, I’m not sure that many people have scrutinised LinkedIn in the same way.

At the moment it might be argued that LinkedIn is not supplying the tools to subscribers to allow them to comply with the GDPR legislation.  For example, I’m not aware of any functionality that allows an organisation to state "I wish to completely forget this person, and ensure that I cannot connect, view data on or contact them without their expressed consent”.  If that’s a minimum requirement of any internal system, why would it not be a minimum requirement for LinkedIn?

It could be that once that test case comes, a lot of organisations will take a look at LinkedIn and decide it doesn’t stand up, and it’s no longer worth the risk.

Scenario - LinkedIn, as the data controller, is responsible for the contact made by any users within the system.

This is potentially even worse for LinkedIn.  Since LinkedIn hold the data about people, provide the tools for discovering those people, provide the tools for contacting people, and for relaying those messages, it may be argued that it is up to LinkedIn to provide the mechanism to allow Users to state that they do not wish to be visible to or contacted by a given organisation.

That is, whilst it is another user who is sending the message, it may be that a future test case could state that LinkedIn are responsible for keeping track of who has ‘forgotten’ who.

By not providing that mechanism, and allowing users on the system to make contact when the contact is not welcome and against the target’s wishes, it’s possible that LinkedIn could be argued as being responsible for the unwelcome contact and therefore misuse of data.

Summary

Today, it seems that LinkedIn is in a bit of limbo.

There may be a recognised way to use LinkedIn in the GDPR era - find someone, check in my system that I’m allowed to contact them, go back to LinkedIn and contact them - but in order for that to work it requires the due diligence of recruiters to ensure that the law isn’t broken.

Realistically, something will have to change, or that test case is coming; at some point, someone is going to get an email that is going to break the limbo.

When that happens, I wonder which way it will go..?

Things I still believe in

Fri, 2018-10-19 09:49
Over 10 years ago I wrote a blog post on things that I believe in - as a developer, and when I re-read it recently I was amazed at how little has changed.

I'm not sure if that's a good thing, or a bad thing - but it's certainly a thing.

Anyway - here's that list - slightly updated for 2018... if you've seen my talk on Unit Testing recently, you might recognise a few entries.

(opinions are my own, yada yada yada)
  • It's easier to re-build a system from its tests than to re-build the tests from their system.

  • You can measure code complexity, adherence to standards and test coverage; you can't measure quality of design.

  • Formal and flexible are not mutually exclusive.

  • The tests should pass, first time, every time (unless you're changing them or the code).

  • Test code is production code and it deserves the same level of care.

  • Prototypes should always be thrown away.

  • Documentation is good, self documenting code is better, code that doesn't need documentation is best.

  • If you're getting bogged down in the process then the process is wrong.

  • Agility without structure is just hacking.

  • Pair programming allows good practices to spread.

  • Pair programming allows bad practices to spread.

  • Team leaders should be inside the team, not outside it.

  • Project Managers are there to facilitate the practice of developing software, not to control it.

  • Your customers are not idiots; they always know their business far better than you ever will.

  • A long list of referrals for a piece of software does not increase the chances of it being right for you, and shouldn't be considered when evaluating it.

  • You can't solve a problem until you know what the problem is. You can't answer a question until the question's been asked.

  • Software development is not complex by accident, it's complex by essence.

  • Always is never right, and never is always wrong.

  • Interesting is not the same as useful.

  • Clever is not the same as right.

  • The simplest thing that will work is not always the same as the easiest thing that will work.

  • It's easier to make readable code correct than it is to make clever code readable.

  • If you can't read your tests, then you can't read your documentation.

  • There's no better specification document than the customer's voice.

  • You can't make your brain bigger, so make your code simpler.

  • Sometimes multiple exit points are OK. The same is not true of multiple entry points.

  • Collective responsibility means that everyone involved is individually responsible for everything.

  • Sometimes it's complex because it needs to be; but you should never be afraid to double check.

  • If every time you step forward you get shot down you're fighting for the wrong army.

  • If you're always learning you're never bored.

  • There are no such things as "Best Practices". Every practice can be improved upon.

  • Nothing is exempt from testing. Not even database upgrades or declarative tools.

  • It's not enough to collect data, you need to analyse, understand and act upon that data once you have it.

  • A long code freeze means a broken process.

  • A test hasn't passed until it has failed.

  • A test that can't fail isn't a test.

  • If you give someone a job, you can't guarantee they'll do it well; if you give someone two jobs you can guarantee they'll do both badly.

  • Every meeting should start with a statement on its purpose and context, even if everyone in the meeting already knows.

Promises and Lightning Components

Wed, 2018-10-03 07:56
In 2015, the ECMA specification included the introduction of Promises, and finally (pun intended) the Javascript world had a way of escaping from callback hell and moving towards a much richer syntax for asynchronous processes.

So, what are promises?
In short, it’s a syntax that allows you to specify callbacks that should execute when a function either ’succeeds’ or ‘fails’ (is resolved, or rejected, in Promise terminology).

For many, they're a way of implementing callbacks in a way that makes a little more sense syntactically, but for others it's a new way of looking at how asynchronous code can be structured that reduces the dependencies between them and provides you with some pretty clever mechanisms.
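
By way of a rough illustration - and this is a minimal, framework-free sketch rather than anything from the Lightning documentation - the basic shape looks something like this:


let waitForIt = new Promise( ( resolve, reject ) => {
    // in real code this would be the asynchronous work; here we just succeed after a second
    setTimeout( () => resolve( 'done' ), 1000 );
});

waitForIt
    .then( result => console.log( 'resolved with: ' + result ) )
    .catch( error => console.log( 'rejected with: ' + error ) );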

However, this article isn’t about what promises are, but rather:

How can Promises be used in Lightning Components, and why you would want to?
As with any new feature of Javascript, make sure you double check the browser compatibility to confirm it covers your target browser before implementing anything.

If you want some in depth info on what they are, the best introduction I’ve found is this article on developers.google.com

In addition, Salesforce have provided some very limited documentation on how to use them in Lightning, here.

Whilst the documentation's inclusion can give us hope (Salesforce knows what Promises are and expects them to be used), the documentation itself is pretty slim and doesn’t really go into any depth on when you would use them.

When to use Promises
Promises are the prime candidate for use when executing anything that is asynchronous, and there’s an argument to say that any asynchronous Javascript that you write should return a Promise.

For Lightning Components, the most common example is probably when calling Apex.

The standard pattern for Apex would be something along the lines of:


getData : function( component ) {
let action = component.get("c.getData");

action.setCallback(this, function(response) {

let state = response.getState();

if (state === "SUCCESS") {
let result = response.getReturnValue();
// do your success thing
}
else if (state === "INCOMPLETE") {
// do your incomplete thing
}
else if (state === "ERROR") {
// do your error thing
}
});
$A.enqueueAction(action);
}

In order to utilise Promises in a such a function you would:
  1. Ensure the function returned a Promise object
  2. Call 'resolve' or 'reject' based on whether the function was successful


getData : function( component ) {
return new Promise( $A.getCallback(
( resolve, reject ) => {

let action = component.get("c.getData");

action.setCallback(this, function(response) {

let state = response.getState();

if (state === "SUCCESS") {
let result = response.getReturnValue();
// do your success thing
resolve();
}
else if (state === "INCOMPLETE") {
// do your incomplete thing
reject();
}
else if (state === "ERROR") {
// do your error thing
reject();
}
});
$A.enqueueAction(action);
}));
}

You would then call the helper method in the same way as usual


doInit : function( component, event, helper ) {
helper.getData( component );
}

So, what are we doing here?

We have updated the helper function so that it now returns a Promise that is constructed with a new function that has two parameters, 'resolve' and 'reject'. When the helper is called, the Promise is returned and the function that we passed into the Promise constructor is immediately executed.

When our function reaches its notional 'success' state (inside the 'state == "SUCCESS" section), we call the 'resolve' function that is passed in.

Similarly, when we get to an error condition, we call 'reject'.

In this simple case, you'll find it hard to see where 'resolve' and 'reject' are defined - because we haven't defined them. The Promise provides them for us and, since nothing has been chained onto it, calling them has no visible effect - the Promise essentially operates as if it wasn't there at all. The functionality hasn't changed.
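
It's also worth knowing - although none of the examples in this post rely on it - that 'resolve' and 'reject' can be passed values. If you wanted the caller to receive the Apex return value, rather than handling it inside the helper, a sketch might look like this (the name 'getDataAndReturnIt' is just for illustration - it's not from the original example):


getDataAndReturnIt : function( component ) {
    return new Promise( $A.getCallback( ( resolve, reject ) => {

        let action = component.get( "c.getData" );

        action.setCallback( this, function( response ) {
            if ( response.getState() === "SUCCESS" ) {
                // whatever we pass to resolve arrives in the caller's 'then' callback
                resolve( response.getReturnValue() );
            }
            else {
                // and whatever we pass to reject arrives in the caller's rejection callback
                reject( response.getError() );
            }
        });
        $A.enqueueAction( action );
    }));
}


The caller would then pick the value up in its 'then' - something like helper.getDataAndReturnIt( component ).then( result => { /* use the result */ } );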

Aside - if you're unfamiliar with the 'Arrow Function' notation - E.g. () => { doThing() } - then look here or here. And don't forget to check the browser compatibility.

So the obvious question is.. Why?
What does a Promise give you in such a situation?

Well, if all you are doing is calling a single function that has no dependent children, then nothing. But let's say that you wanted to call "getConfiguration", which called some Apex, and then *only once that was complete* you called "getData".

Without Promises, you'd have 2 obvious solutions:
  1. Call "getData" from the 'Success' path of "getConfiguration".
  2. Pass "getData" in as a callback on "getConfiguration" and call the callback in the 'Success' path of "getConfiguration"
Neither of these solutions is ideal, though the second is far better than the first.

That is - in the first we introduce an explicit dependency between getConfiguration and getData. Ideally, this would not be expressed in getConfiguration, but rather in the doInit (or a helper function called by doInit). It is *that* function which decides that the dependency is important.

The second solution *looks* much better (and is), but it's still not quite right. We now have an extra parameter on getConfiguration for the callback. We *should* also have another callback for the failure path - otherwise we are expressing that only success has a further dependency, which is a partial leaking of knowledge.

Fulfilling your Promise - resolve and reject
When we introduce Promises, we introduce the notion of 'then'. That is, when we 'call' the Promise, we are able to state that something should happen on 'resolve' (success) or 'reject' (failure), and we do it from *outside* the called function.

Or, to put it another way, 'then' allows us to define the functions 'resolve' and 'reject' that will get passed into our Promise's function when it is constructed.

E.g.

We can pass a single function into 'then', and this will be the 'resolve' function that gets called on success.


doInit : function( component, event, helper ) {
helper.getConfiguration( component )
.then( () => { helper.getData( component ) } );
}

Or, if we wanted a failure path that resulted in us calling 'helper.setError', we would pass a second function, which will become the 'reject' function.


doInit : function( component, event, helper ) {
helper.getConfiguration( component )
.then( () => { helper.getData( component ) }
, () => { helper.setError( component ) } );
}

Aside - It's possible that the functions should be wrapped in a call to '$A.getCallback'. You will have seen this in the definition of the Promise above. This is to ensure that any callback is guaranteed to remain within the context of the Lightning Framework, as defined here. I've not witnessed any problem with not including it, although it's worth bearing in mind if you start to get issues on long running operations.

Now, this solution isn't vastly different to passing the two functions directly into the helper function. E.g. like this:


doInit : function( component, event, helper ) {
helper.getConfiguration( component
, () => { helper.getData( component ) }
, () => { helper.setError( component ) } );
}

And whilst I might say that I personally don't like the act of passing in the two callbacks directly into the function, personal dislike is probably not a good enough reason to use a new language feature in a business critical system.

So is there a better reason for doing it?

Promising everything, or just something
Thankfully, Promises are more than just a mechanism for callbacks, they are a generic mechanism for *guaranteeing* that 'settled' (fulfilled or rejected) Promises result in a specified behaviour occurring once certain states occur.

When using a simple Promise, we are simply saying that the behaviour should be that the 'resolve' or 'reject' functions get called. But that's not the only option. For example, we also have:

  • Promise.all - Will 'resolve' only when *all* the passed in Promises resolve, and will 'reject' if and when *any* of the Promises reject.
  • Promise.race - Will 'resolve' or 'reject' when the first Promise to respond comes back with a 'resolve' or 'reject' (there's a short Promise.race sketch at the end of this post).

Once we add that to the mix, we can do something a little clever...

How about having the component load with a 'loading spinner' that is only switched off when all three calls to Apex respond with success:


doInit : function( component, event, helper ) {
Promise.all( [ helper.getDataOne( component )
, helper.getDataTwo( component )
, helper.getDataThree( component ) ] )
.then( () => { helper.setIsLoaded( component ) } );
}

Or even better - how about we call getConfiguration, then once that’s done we call each of the getData functions, and only when all three of those are finished do we set the flag:


doInit : function( component, event, helper ) {
helper.getConfiguration( component )
.then( () => Promise.all( [ helper.getDataOne( component )
, helper.getDataTwo( component )
, helper.getDataThree( component ) ] ) )
.then( () => { helper.setIsLoaded( component ) } );
}

Or how about - we normally call three functions to get data, unless a flag is set, at which point we want to call a fourth function, and only when all four are complete do we set the flag:


doInit : function( component, event, helper ) {

helper.getConfiguration( component )
.then( () => {

let initialisations = [ helper.getDataOne( component )
, helper.getDataTwo( component )
, helper.getDataThree( component ) ];

if ( component.get( 'v.runGetDataFour' ) ) {
initialisations.push( helper.getDataFour( component ) );
}

return Promise.all( initialisations );
})
.then( () => { helper.setIsLoaded( component ) } );
}

Now, just for a second, think about how you would do that without Promises...
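
And, while I'm here - a quick sketch of Promise.race to go with it (an illustration of the idea rather than code lifted from a real component): give up waiting and show an error if the data hasn't arrived within 5 seconds:


doInit : function( component, event, helper ) {

    let timeout = new Promise( ( resolve, reject ) => {
        // reject if nothing else has settled the race within 5 seconds
        setTimeout( () => reject( 'timed out' ), 5000 );
    });

    Promise.race( [ helper.getData( component ), timeout ] )
        .then( () => { helper.setIsLoaded( component ) }
             , () => { helper.setError( component ) } );
}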

Throw it away - Why you shouldn't keep your POC

Sat, 2014-12-13 04:26

"Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.

They are crucial for ensuring that facts are gathered before some particularly risky decisions are made.  Technical or functional, they can address many different concerns and each one can be different, but they all have one thing in common.  They serve to answer questions.

It can be tempting, whilst answering these questions, to become attached to the code that you generate.

I would strongly argue that you should almost never keep the code that you build during a POC.  Certainly not to put into a production system.

I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating, more efficient and makes proof of concepts more effective by focussing the minds on the questions that require answers.

Why do we set out on a proof of concept?

The purpose of a proof of concept is to (by definition):

  * Prove:  Demonstrate the truth or existence of something by evidence or argument.
  * Concept: An idea, a plan or intention.

In most cases, the concept being proven is a technical one.  For example:
  * Will this language be suitable for building x?
  * Can I embed x inside y and get them talking to each other?
  * If I put product x on infrastructure y will it basically stand up?

They can also be functional, but the principles remain the same for both.

It's hard to imagine a proof of concept that cannot be phrased as one or more questions.  In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.

The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking.  If you *do* already know the answers, then the POC is of no value to you.

By extension, there is the implication that the questions posed require to be answered as soon as possible in order to support a decision.  If that's not the case then, again, the POC is probably not of value to you.

As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.

This is quite different to what we set out to do in our normal software development process. 

We normally know the answer to the main question we're asking (How do we functionally provide a solution to this problem / take advantage of this opportunity), and most of the time is spent focussed on building something that is solid, performs well and generally good enough to live in a production environment - in essence, not answering the question, but producing software.

What process do we follow when embarking on a proof of concept?

Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.

With the main question in mind, you often follow an almost scientific process.  You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or deny that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.

You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up.  It is an entirely exploratory process.

Often, you will find that you spend days following avenues that don't lead anywhere, backtrack and reassess, following a zig-zag path through a minefield of wrong answers until you reach the end point.  In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.

But that's OK.  The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.

To illustrate:

Will this language be suitable for building x?

You may need to check things like that you can build the right type of user interfaces, that APIs can be created, that there are ways of organising code that makes sense for the long term maintenance for the system.

You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.

That said, if you were building a production system in the language you wouldn't dream of having an incomplete UI, or an API that doesn't handle errors completely, or just knocking stuff together in an ad-hoc manner.

Can I embed x inside y and get them talking to each other?

You will probably need to define a communication method and prove that it basically works.  Get something up and running that is at least reasonably functional in the "through the middle" test case.

You probably don't need to develop an architecture that is clean with separation of concerns that means the systems are properly independent and backwards compatible with existing integrations. Or that all interactions are properly captured and that exceptional circumstances are dealt with correctly.

That said, if you were building a production system, you'd need to ensure that you define the full layered architecture, understand the implications of lost messages, prove the level of chat that will occur between the systems.  On top of that you need to know that you don't impact pre-existing behaviour or APIs.

If I put product x on infrastructure y will it basically stand up?

You probably need to just get the software on there and run your automated tests.  Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.

You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.

That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.

Production development and Proof of Concept development is not the same

The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.

You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.

When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.

That is,  you're not usually trying to ensure that this piece of code works in all use-cases, but rather that this concept works in the general case with a degree of certainty that you can *extend* it to all cases.

Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.

That is intellectual currency, not software.  The important deliverable of a production build is the software that is built.

That is the fundamental difference, and why you should throw your code away.

The opportunity cost of delaying software releases

Thu, 2014-10-09 05:56
Let me paint a simple picture (but with lots of numbers).

Some software has been built.  It generates revenue (or reduces cost) associated with sales, but the effect is not immediate.  It could be the implementation of a process change that takes a little time to bed in, or the release of a new optional extra that not everyone will want immediately.

It is expected that when it is initially released there’ll be a small effect.  Over the next 6 months there will be an accelerating uptake until it reaches saturation point and levels off.

Nothing particularly unusual about that plan.  It probably describes a lot of small scale software projects.
Now let’s put some numbers against that.

At saturation point it’s expected to generate / save an amount equal to 2% of the total revenue of the business.  It might be an ambitious number, but it’s not unrealistic.

The business initially generates £250k a month, and experiences steady growth of around 10% a year.

What does the revenue generation of that software look like over the first 12 months?
It’s pretty easy to calculate, plugging in some percentages that reflect the uptake curve:

Period | Original Business Revenue | Software Revenue Generation | Additional Revenue
1      | £250,000.00               | 0.2%                        | £500.00
2      | £252,500.00               | 0.5%                        | £1,262.50
3      | £255,025.00               | 1.1%                        | £2,805.28
4      | £257,575.25               | 1.6%                        | £4,121.20
5      | £260,151.00               | 1.9%                        | £4,942.87
6      | £262,752.51               | 2.0%                        | £5,255.05
7      | £265,380.04               | 2.0%                        | £5,307.60
8      | £268,033.84               | 2.0%                        | £5,360.68
9      | £270,714.18               | 2.0%                        | £5,414.28
10     | £273,421.32               | 2.0%                        | £5,468.43
11     | £276,155.53               | 2.0%                        | £5,523.11
12     | £278,917.09               | 2.0%                        | £5,578.34
Total  |                           |                             | £51,539.34
Or, shown on a graph:




So, here’s a question:

What is the opportunity cost of delaying the release by 2 months?
The initial thought might be that the effect isn’t that significant, as the software doesn’t generate a huge amount of cash in the first couple of months.

Modelling it, we end up with this:

Period | Original Business Revenue | Software Revenue Generation | Additional Revenue
1      | £250,000.00               | -                           | £-
2      | £252,500.00               | -                           | £-
3      | £255,025.00               | 0.2%                        | £510.05
4      | £257,575.25               | 0.5%                        | £1,287.88
5      | £260,151.00               | 1.1%                        | £2,861.66
6      | £262,752.51               | 1.6%                        | £4,204.04
7      | £265,380.04               | 1.9%                        | £5,042.22
8      | £268,033.84               | 2.0%                        | £5,360.68
9      | £270,714.18               | 2.0%                        | £5,414.28
10     | £273,421.32               | 2.0%                        | £5,468.43
11     | £276,155.53               | 2.0%                        | £5,523.11
12     | £278,917.09               | 2.0%                        | £5,578.34
Total  |                           |                             | £41,250.69
Let’s show that on a comparative graph, showing monthly generated revenue:


Or, even more illustrative, the total generated revenue:


By releasing 2 months later, we do not lose the first 2 months’ revenue – we lose the revenue roughly equivalent to P5 and P6.
Why?

When we release in P3, we don’t immediately get the P3 revenue we would have got.  Instead we get something roughly equivalent to P1 (it’s slightly higher because the business generates a little more revenue overall in P3 than it did in P1).

This trend continues in P3 through to P8, where the late release finally reaches saturation point (2 periods later than the early release – of course).

Throughout the whole of P1 to P7 the late release has an opportunity cost associated with it.  That opportunity cost is never recovered later in the software’s lifespan, as the revenue / cost saving we could have generated in those periods is gone.

If the business was not growing, this would amount to a total equal to the last 2 periods of the year.

In our specific example, the total cost of delaying the release for 2 months amounts to 20% of the original expected revenue generation for the software project in the first year.
And this opportunity cost is solely related to the way in which the revenue will be generated; the rate at which the uptake comes in over the first 6 months.

Or to put it another way – in this example, if you were to increase or decrease the revenue of the business or the percentage generation at which you reach saturation point the cost will always be 20%.
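
If you want to check that for yourself, here's a rough sketch of the model (the numbers are assumptions chosen to match the tables above - £250k starting revenue, 1% growth a month and an uptake curve that saturates at 2% after 6 periods) - scale the revenue or the uptake percentages and the answer stays at roughly 20%:


// the uptake curve - the proportion of business revenue generated in each month since release
let uptake = [ 0.002, 0.005, 0.011, 0.016, 0.019, 0.02 ];

function revenueGenerated( delayMonths ) {
    let total = 0;
    for ( let period = 1; period <= 12; period++ ) {
        let businessRevenue = 250000 * Math.pow( 1.01, period - 1 );
        let monthsLive = period - delayMonths;
        if ( monthsLive >= 1 ) {
            // once past the sixth live month the rate stays at the 2% saturation point
            let rate = uptake[ Math.min( monthsLive, uptake.length ) - 1 ];
            total += businessRevenue * rate;
        }
    }
    return total;
}

let onTime  = revenueGenerated( 0 );  // roughly £51,539
let delayed = revenueGenerated( 2 );  // roughly £41,251
console.log( ( ( onTime - delayed ) / onTime * 100 ).toFixed( 1 ) + '% lost' );  // roughly 20%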

So, when you’re thinking of delaying the release of software it’s probably worth taking a look, modelling your expected uptake and revenue generation to calculate just how much that will cost you…

How do I type e acute (é) on Windows 8

Wed, 2014-10-08 09:27

I keep on forgetting how to type é on Windows 8 (I used to use CTRL+ALT+e, but that's now often reserved for the Euro symbol).

I then tend to run a search on Google, and end up being pointed towards 8 year old answers that point you to character map, options in old version of word, or the old way of typing the extended ASCII character code.

They all suck.

And then I remember - it's easy.

You start by pressing CTRL + a key that represents the accent, then type the letter you want accented.

For example, CTRL + ' followed by e gives you é.

Brilliant!

The great thing about using this technique is that the characters you use (dead letters) are representative of the accents you want to type. This makes them much easier to remember than the seemingly random character codes.

Here are the ones I know about:

Keystrokes                  | Accent type         | Examples
CTRL + '                    | acute               | é
CTRL + `                    | grave               | è
CTRL + SHIFT + 6 / CTRL + ^ | circumflex          | ê
CTRL + ,                    | cedilla             | ç
CTRL + ~                    | perispomene         | õ
CTRL + SHIFT + 7 / CTRL + & | Diphthongs / others | a = æ, o = œ, s = ß

It doesn't quite work with every app (Blogger on Chrome, for example), but it certainly covers Office 2013, including both Outlook and Word.

Gamification to level 80

Fri, 2014-01-31 04:14
Since the end of July last year I've been test driving one of the latest online tools that hopes to change your life by giving you the ability to store your task lists.

Wow. What could be more underwhelming, and less worthy of a blog post?

Well, this one is different.  This one takes some of the huge amount of thinking on the behaviour of "millennials" and "Generation Y", adds a big dose of social context and ends up with something quite spectacular.

This is the gamification of task lists, this is experience points and levelling up, buying armour and using potions, this is World of Warcraft where the grinding is calling your mam, avoiding junk food or writing a blog post.

This is HabitRPG.
The concept is simple, you manage different styles of task lists.
  • If you complete entries on them you get experience points and coins.
  • If you fail to do them you lose hit points.

Depending on whether you're setting yourself realistic targets and completing them you either level up, or die and start again.
Get enough coins and you can buy armour (reduce the effect of not hitting your targets), weapons (increase the effect of achieving things) or customised perks (real world treats that you give yourself).
There's a wealth of other treats in there too, but I don't want to spoil it for you, because as each of them appears you get a real jolt of surprise and delight (look out for the flying pigs).

So, what do I mean by "different styles of task lists"? Well, the lists are split into three - Habits, Dailies and Todos:

Habits
These are repeating things that you want to get into the habit of doing, or bad habits you want to break.

They have no schedule, or immediate urgency, they just hang around and you come back every now and again to say "yup, did that".  You can set things up as positive or negative, and so state if they are a good or bad habit.

Examples might be:
  • Phone mother (positive)
  • Get a takeaway (negative)
  • Empty the bins (both - positive if you do it, negative if your partner does it)

Dailies
Suffering from a bit of a misnomer, dailies are repetitive tasks with some form of weekly schedule. Things that you want to do regularly, and on particular days. You can set a task to be required every day, only every Tuesday, or anything between.

Whilst un-actioned habits are benign, if you don't tick off a daily then you get hurt.  With habits you're gently encouraged to complete them as often as possible. Dailies come with a big stick...
Examples might be:
  • Go to the gym
  • Do an uninterrupted hour of productive work

Todos
The classic task. The one-off thing that you've got to do, and once it's done you can cross it off and move on to the next thing.

In terms of functionality, they're pretty much the same as dailies - If you don't do a task it hurts.

Examples might be:
  • Write a blog post about HabitRPG
  • Book a holiday cottage in Wales

Other bits
They have a mobile app on both iOS and Android.  I use Android, and it does the job - nothing fancy, but it works.  Most of what you need to do is available to do on the move.

It's missing the ability to work offline, though it's not a huge problem.  I can imagine it being added soon, and I really hope it is.  Sometimes, sitting on the tube, I think of things that I need to do and it would be great to be able to add them to my task list without waiting until I get over-ground again.

Functionality is added regularly, and there is clearly a strong community spirit in the developers who are producing the site.  A kickstarter provided a boost to funds, but they seem to have worked out how to monetise the site and it looks like it'll keep being developed for some time - which is obviously good news!

There are a few community plug-ins out there (they made the good choice of using the public API to hook their UI up, meaning any functionality in the site is available in the API), including one that works like "stayfocused", monitoring your internet browsing habits and rewarding or punishing your HabitRPG character appropriately.

The APIs also open up the idea of a sales system driven by some of the concepts in HabitRPG, if not HabitRPG itself (though maybe with Ferraris instead of Golden Swords).  I'd be amazed if this wasn't picked up by a Salesforce developer sometime soon...


Conclusion
I have to admit, I was excited about this idea the moment I heard about it, though I didn't want to blog about it straight away - I wanted to see if it had some legs first.

Sure there are other sites doing similar things, take a look at blah blsh and blah. But, excuse the pun,  this is another level.

When I first started using HabitRPG I had very short term goals. Your character is fragile, so naturally I did what I could to avoid getting hurt. I avoided unrealistic goals, or even goals that I might not get around to for a couple of days. Only todos I was likely to do that day got added.

As I've got further through I have found that I am more inclined to set longer target todos. They hurt you less as you have armour, and the longer you leave them the more XP you get. It sounds like cheating, but it's not. It's simply that I've matured the way in which I use my task manager.

It's missing some things that I might expect from a really rich task manager - tags can be used to group items and tasks can be split with simple sub-tasks, but there's nothing more advanced than that - no dependent tasks, or chains of tasks for example.

But maybe the simplicity is key to its success. I rarely need more than a simple reminder, so why complicate things?

You have to be careful with the habits. It can be tempting to add a bad habit in there that you've already pretty much broken, but if Steven Levitt and Stephen J. Dubner are right then you'll end up replacing an intangible moral cost with a tangible HabitRPG cost and end up picking up that bad habit again.

It differs from sites like Strava, in that this is not primarily a competitive site - it needs to focus on the individual as it is trivially easy to "cheat".  You can add arbitrary tasks and complete them immediately - though it really defeats the purpose.  It relies on you entering a contract with yourself to use the site productively.  For that reason, any fundamental introduction of competitiveness to the site is flawed.

However, there is the concept of "challenges" - the idea that you can set goals, assign a prize and then invite people to compete.  It works, but only on the principle that people entering the challenges can be trusted.

All in all this has proven to be a pretty successful experiment for me - since I've started using it I've hardly missed a day at the gym, my washing basket is empty, all my shirts are ironed, I've managed to make it to yoga and I even call my dad more often.

And with a character at level 32 I'm becoming a god!

BBC and web accelerators don't mix

Wed, 2013-09-18 02:52
Do you have a problem with the BBC website? Even though you're based in the UK it seems to have a lot of adverts for American companies on it? And it's not that you don't like American companies, it's just that you've already paid your licence fee and that means that you shouldn't have to see any adverts at all.
Even worse than that, because it thinks you're not in the UK, it won't let you access any of the iPlayer content!

At some point in the last few weeks that started to happen to me on my Android (Nexus 10, no less) tablet. Thankfully I managed to solve it.

A quick scan of the BBC help seemed to just direct me to my ISP (they may route you through foreign / non-UK servers - I knew this wouldn't be true as my desktop works OK, and it doesn't sound like BT). A quick mail to them seemed to confirm my fears - no problem there.

A mail to the BBC was never going to be much use. I don't blame them, they have a lot of people to deal with and none of them pay a subscription fee that they could choose to cancel. It makes no economic sense for them to have a good technical help line.

Anyway, after a lot of Google searching for phrases like:
  • The BBC thinks I'm not in the UK when I am.
  • iPlayer thinks I'm not in the UK.
  • iPlayer won't serve me any content.
  • BBC has adverts on it.

And many other variations on the theme, I decided to go back to the BBC site and give it one last go.

On one of their help pages I spotted a pretty throwaway comment about web accelerators causing problems sometimes. Knowing that Google's a little too clever for its own good sometimes this seemed like a good avenue to check.

It turns out that this was the problem, and it's really easy to solve.

In Chrome (on Android 4.3 anyway) go to:
  • Settings
  • Advanced
  • Bandwidth management
  • Reduce data usage
  • Then in the top right corner, flick the switch to off.

That simple.

Basically, if this is switched on then a lot of content isn't served from its source. Rather you get it from Google instead - it gets it from the source, simplifies it, re-compresses it and then sends it to you so that you can get a modest 20% saving on your download sizes.

The result is pretty much as the BBC describes it, your internet content is routed through non-UK servers. It's just that it's not your ISP doing it, it's Google.

So, hopefully, when you get that dreaded "BBC says I'm not in the UK" feeling, your Google search will bring you here and you'll solve the problem in a fraction of the time it took me!

Redundancies should come with a pay rise

Sat, 2013-08-31 10:46

As far as I can see, there is only one reason why a company should ever make redundancies.

Due to some unforeseen circumstances the business has become larger than the market conditions can support and it needs to shrink in order to bring it back in line.

Every other reason is simply a minor variation or a consequence of that underlying reason.

Therefore, if the motivation is clear, and the matter dealt with successfully, then once the redundancies are over the business should be "right sized" (we've all heard that term before), and it should be able to carry on operating with the same values, practices and approach that it did prior to the redundancies.

If the business can't, then I would suggest that it is not the right size for the market conditions and therefore the job isn't complete.

OK, there may be some caveats to that, but to my mind this reasoning is sound.

In detail:

When you reduce the headcount of the business you look for the essential positions in the company, keep those, and get rid of the rest.

Once the redundancies are finished you should be left with only the positions you need to keep in order to operate successfully.

It's tempting to think that you should have a recruitment freeze and not back-fill positions when people leave, but if someone leaves and you don't need to replace them, then that means you didn't need that position, in which case you should have made it redundant.

Not back-filling positions is effectively the same as allowing your employees to choose who goes based on their personal motives rather than forcing the business heads to choose based on the business motives.  This doesn't make business sense.

So, you need to be decisive and cut as far as you can go without limiting your ability to operate within the current market conditions.

To add to that, recruitment is expensive.  If you're in a highly skilled market then you'll likely use an agency. They can easily charge 20% of a salary for a perm head.  On top of that you have the cost of bringing someone up to speed, at a time when you're running at the minimum size your market will allow.  Plus there's the cost of inefficiency during the onboarding period as well as the increased chance of the remaining overstretched employees leaving as well.

The upshot is that you really can't afford to have people leave, it's so expensive that it jeopardises the extremely hard work you did when you made the redundancies.

There's a theory I often hear that you can't have contractors working when the perm heads are being marched out.  That's a perfectly valid argument if the perm head would be of long term value to you and can do the job that the contract head can do.  But if you need the contractor to do a job that only lasts another 3 months and that person is by far the best or only person you have for the job, then the argument just doesn't stand up.  Get rid of the perm position now and use the contractor, it'll be cheaper and more beneficial to the business in the long run.

OK, that's maybe not the most sentimental of arguments, but why would you worry about hurting the feelings of people who no longer work for you, at the expense of those that still do?

It may even be worse than that - you could be jeopardising the jobs of others that remain by not operating in the most efficient and effective way possible.

Another prime example is maternity cover.  If you need the person on maternity to come back to work then you almost certainly need the person covering them. If it's early in the maternity leave then you'll have a long period with limited staff, if it's late in the leave then you only need the temporary cover for a short period more. Either way you're overstretching the perm staff left to cover them and risking having them leave.

Finally, there's the motivation to ensure that the business that remains is running as lean as possible. That costs are as low as they could be. The temptation is to cut the training and entertainments budget to minimum and pull back on the benefits package.
As soon as you do this you fundamentally change the character of the business.  If you always prided yourself on being at the forefront of training then you attracted and kept staff who valued that. If you always had an open tab on a Friday night at the local bar, then you attracted people who valued that.  Whatever it is that you are cutting back on, you are saying to people who valued it that "we no longer want to be as attractive to you as we once were; we do not value you quite as much as we did". This might not be your intention, but it is the message your staff will hear.

I put it to you that the cheapest way to reduce costs after redundancies is to be completely honest to the staff you keep. Say it was difficult, say that you're running at minimum and that a lot will be expected of whoever's left. But tell them that they're still here because they're the best of the company and they are vital to the company's success.  Let them know that the contractors you've kept are there because they're the best people for those positions to ensure that the company succeeds.  Tell them that the contractors will be gone the moment they're not generating value or when a perm head would be more appropriate.  Make it clear that the company is now at the right size and the last thing you want is for people to leave, because you value them and that if they left it would damage your ability to do business.

Then give them a pay rise and a party to prove it.

Agile and UX can mix

Thu, 2013-08-29 05:19
User experience design is an agile developer's worst nightmare. You want to make a change to a system, so you research. You collect usage stats, you analyse hotspots, you review, you examine user journeys, you review, you look at drop off rates, you review. Once you've got enough data you start to design. You paper prototype, run through with users, create wireframes, run through with users, build prototypes, run through with users, do spoken journey and video analysis, iterate, iterate, iterate, until finally you have a design.

Then you get the developers to build it, exactly as you designed it.

Agile development, on the other hand, is a user experience expert's worst nightmare. You want to make a change to a system, so you decide what's the most important bit, and you design and build that - don't worry how it fits into the bigger picture, show it to the users, move on to the next bit, iterate, iterate, iterate, until finally you have a system.

Then you get the user experience expert to fix all the clumsy workflows.

The two approaches are fundamentally opposed.

Aren't they?

Well, of course, I'm exaggerating for comic effect, but these impressions are only exaggerations - they're not complete fabrications.

If you look at what's going on, both approaches have the same underlying principle - your users don't know what they want until they see something. Only then do they have something to test their ideas against.  Both sides agree: the earlier you get something tangible in front of users, the more appropriate and successful the solution will be.

The only real difference in the two approaches as described is the balance between scope of design and fullness of implementation. On the UX side the favour is for maximum scope of design and minimal implementation; the agile side favours minimal scope of design and maximum implementation.

The trick is to acknowledge this difference and bring them closer together, or mitigate against the risks those differences bring.

Or, to put it another way, the main problem you have with combining these two approaches is the lead up time before development starts.

In the agile world some people would like to think that developing based on a whim is a great way to work, but the reality is different. Every story that is developed will have gone through some phase of analysis even in the lightest of light touch processes. Not least someone has decided that a problem needs fixing.  Even in the most agile of teams there needs to be some due diligence and prioritisation.

This happens not just at the small scale, but also when deciding which overarching areas of functionality to change. In some organisations there will be a project (not a dirty word), in some a phase, in others a sprint. Whatever it's called, it'll be a consistent set of stories that build up to be a fairly large scale change in the system. This will have gone through some kind of appraisal process, and rightly so.

Whilst I don't particularly believe in business cases, I do believe in due diligence.

It is in this phase, the research, appraisal and problem definition stage, that UX research can start without having a significant impact on the start-up time. Statistics can be gathered and evidence amassed to describe the problem that needs to be addressed. This can form a critical part of the argument to start work.

In fact, this research can become part of the business-as-usual activities of the team and can be used to discover issues that need to be addressed. This can be as "big process" as you want it to be, just as long as you are willing, and have the resources, to pick up the problems that you find, and that you have the agility to react to clear findings as quickly as possible. Basically, you need to avoid being in the situation where you know there's a problem but you can't start to fix it because your process states you need to finish your 2 month research phase.

When you are in this discovery phase there's nothing wrong with starting to feel out some possible solutions. Ideas that can be used to illustrate the problem and the potential benefits of addressing it. Just as long as the techniques you use do not result in high cost and (to reiterate) a lack of ability to react quickly.

Whilst I think it's OK to use whatever techniques work for you, for me the key to keeping the reaction time down is to keep it lightweight.  That is, make sure you're always doing enough to find out what you need to know, but not so much that it takes you a long time to reach conclusions and start to address them. User surveys, spoken narrative and video recordings, all of which can be done remotely, can be done at any time, and once you're in the routine of doing them they needn't be expensive.  Be aware that large sample sets might improve the accuracy of your research, but they also slow you down.  Keep the groups small and focused - applicable to the size of team you have to analyse and react to the data. Done right, these groups can be used to continually scrutinise your system and uncover problems.

Once those problems are found, the same evidence can be used to guide potential solutions. Produce some quick lo-fi designs, present them to another (or the same, if you are so inclined) small group and wireframe the best ones to include in your argument to proceed.  I honestly believe that once you're in the habit, this whole process can be implemented in two or three weeks.

Having got the go ahead, you have a coherent picture of the problem and a solid starting point from which to commence the full blown design work.  You can then move into a short, sharp and probably seriously intense design phase.

At all points, the design that you're coming up with is, of course, important. However, it's vital that you don't underestimate the value of the thinking process that goes into the design. Keep earlier iterations of the design, keep notes on why the design changed. This forms a reference document that you can use to remind yourself of the reasoning behind your design. This needn't be a huge formal tome; it could be as simple as comments in your wireframes, but an aide mémoire for the rationale behind where you are today is important.
In this short sharp design phase you need to make sure that you get to an initial conclusion quickly and that you bear in mind that this will almost certainly not be the design that you actually end up with.  This initial design is primarily used to illustrate the problem and the current thinking on the solution to the developers. It is absolutely not a final reference document.

As soon as you become wedded to a design, you lose the ability to be agile. Almost by definition, an agile project will not deliver exactly the functionality it set out deliver. Recognise this and ensure that you do the level of design appropriate to bring the project to life and no more.

When the development starts, the UX design work doesn't stop. This is where the ultimate design work begins - the point at which the two approaches start to meld.

As the developers start to produce work, the UX expert starts to have the richest material he could have - a real system. It is quite amazing how quickly an agile project can produce a working system that you are able to put in front of users, and there's nothing quite like a real system for investigating system design.

It's not that the wireframes are no longer of use. In fact, early on the wireframes remain a vital, and probably the only, coherent view of the system and these should evolve as the project develops.  As elements in the system get built and more rigidly set the wireframes are updated to reflect them. As new problems and opportunities are discovered, the wireframes are used to explore them.

This process moves along in parallel to the BA work that's taking place on the project. As the customer team splits and prioritises the work, the UX expert turns their attention to the detail of their immediate problems, hand in hand with the BAs. The design that's produced is then used to explain the proposed solutions to the development team and act as a useful piece of reference material.

At this point the developers will often have strong opinions on the design of the solution, and these should obviously be heard. The advantage the design team now have is that they have a body of research and previous design directions to draw on, and a coherent complete picture against which these ideas (and often criticisms) can be scrutinised.  It's not that the design is complete, or final, it's that a valuable body of work has just been done, which can be drawn upon in order to produce the solution.

As you get towards the end of the project, more and more of the wireframe represents the final product.  At this point functionality can be removed from the wireframe in line with what's expected to be built.  In fact, this is true all the way through the project, it's just that people become more acutely aware of it towards the end.

This is a useful means of testing the minimum viable product. It allows you to check with the customer team how much can be taken away before you have a system that could not be released: a crucial tool in a truly agile project.  If you don't have the wireframes to show people, the description of functionality that's going to be in or out can be open to interpretation - which means it's open to misunderstanding.
Conclusion
It takes work to bring a UX expert into an agile project, and it takes awareness and honesty to ensure that you're not introducing a big-up-front design process that reduces your ability to react.

However, by keeping in mind some core principles - that you need to be able, and willing, to throw work away; you should not become wedded to a design early on; you listen to feedback and react; you keep your level of work and techniques fit for the just-in-time problem that you need to solve right now - you can add four huge advantages to your project.

  • A coherent view and design that bind the disparate elements together into a complete system.
  • Expert techniques and knowledge that allow you to discover the right problems to fix with greater accuracy.
  • Design practices and investigative processes that allow you to test potential solutions earlier in the project (i.e. with less cost) than would otherwise be possible, helping ensure you do the right things at the right time.
  • Extremely expressive communication tools that allow you to describe the system you're going to deliver as that understanding changes through the project.

Do it right and you can do all this and still be agile.

Remote workforces and the 12 golden questions

Fri, 2013-08-02 07:53
I had an interesting conversation with a friend the other day about the difficulties in managing a remote team. That is, a team who aren't all located in the same office. Some may be home workers, some may work in different offices.  The main crux of the discussion was around how you turn a group of people into a team, garner some emotional connection between them, and to you and your company, and then get the best out of them.

After a few days of gestation and rumination it came to me. The rules are the same as with a local team - you may do different things and the problems may be more difficult to overcome, but the techniques you use are fundamentally the same.

That thinking led me back to Marcus Buckingham's fantastic book "First Break all the Rules". If you manage people and haven't read this book - shame on you. It is a must read.

One of the main arguments in the book revolves around a set of questions you should ask of your staff, defined through years of research by Gallup as the strongest signifiers of a team that is performing well.

If you get good responses to these questions then you probably have a good team.

Now I'm not going to explain the why's and wherefores of these questions, that has been done far better than I ever could in Marcus's book. Buy it and read it.

What I'd like to do is go over each of the questions and look at what you may need to do as a manager of a remote team in order to ensure that you get positive responses to these questions.
I know what is expected of me at work.
Much like you would with a locally grouped team this is as simple, and as difficult as it sounds: keeping in touch, setting targets and boundaries, being available and honest. All those things that a good manager instinctively does.

The only real difference is that it takes more effort to organise those face-to-face chats.

It starts with honesty at the interview: clearly defining the role that's on offer, what's involved and what's not involved. From there it moves to regular catch ups to get a feel for where they think they are, and for you to feed back where they actually are, then finally to ensuring that rewards and praise are given when the expectations are met and exceeded.  Put in the simplest of terms you're regularly telling them what you expect then reinforcing that with action.

For some people this will feel like constant badgering, and for others you'll never be able to do enough, but I don't think there's anything about remote working that makes this fundamentally different to managing local workers.
I have the materials and equipment I need to do my work right.
Every tool you would normally provide in an office you should expect to provide for a remote worker. OK, maybe not the pen and pad, but you could consider corporate branded versions of both. At least it's a reminder of who they work for!

Every bit of software you would normally provide on a desktop needs to be available in their home office. 

Every document that they may need to access on the move should be available on-line.  Workers that are expected to spend most of their time on client sites should have access to software that is appropriate for onsite work from any device that has internet access.  Ideally they should have offline versions too. I.e. access to versions of their software that work when not connected to the internet, and that will automatically sync when the connection is made available.  If you've ever used gmail, blogger or evernote on a disconnected tablet you will know what I mean.

You need to do everything you can to limit the chances that they'll ever be in a situation where they are disconnected from their tools.
At work I have the opportunity to do what I do best every day.
You might hope that this should be easier to achieve with remote workers than it would with a team in a single office.  Working on the move or at home gives people a chance to get on and do some work without all those pesky distractions like other people.

However, it's very easy to underestimate the impact remote working has on ease of communication, and in turn, the amount of time it takes to have those communications.  If you're not careful, those informal two-minute chats in the kitchen turn into 1,000-word project update documents.  You can see how there can be a death by a thousand cuts as layers of bureaucracy are added in order to keep everyone in the loop.

In addition, how can a manager see what a team member is best at when they don't physically witness them doing it?  It's not always easy in the office to spot someone's talents (or areas of difficulty for that matter) and guide them towards utilising them.  It's an order of magnitude harder when you don't spend that vital face-to-face time with them every day.

Ironically, it can be tempting to have people fill in time-sheets and detailed updates in order to help spot the things that are done quickly and well - the things that are second nature - but this simply distracts people from what they do best, and not everyone's talent is writing updates!

There's no simple answer to this. It takes a very special manager who can read their employees from a distance and a special kind of employee who is self-aware enough to be honest about their strengths and weaknesses.  It starts with the culture of the management team and their all-pervasive attitude towards spotting strengths.  They need to make sure that the workforce is constantly aware that this is the approach the management team is taking, and that gives employees a strong incentive to be honest.

Part of that is then listening to your staff when they describe areas of difficulty. Sometimes this may highlight personal areas where the talents are lacking; in others it may be that the processes are getting in the way of providing real value. In either case you need to clearly assess the situation and act decisively and positively when needed.

It's vital that everybody is very clear about what they, and their team, do best and that people are allowed to focus on that as much as possible.
In the last 7 days I have received recognition or praise for doing good work.
This one should be simple.  All you have to do is follow the same rules that you normally would in the office: praise publicly or privately depending on the person you're dealing with.

Praise successes at the monthly get-together, on the intranet, via mail, a conference call or a chat on the phone - whichever is appropriate for the person and level of success.  However, whenever, just don't forget to do it.

Of course, you have to be much more diligent about this since the people you're praising aren't in front of you all the time.  It's harder to spot their frustration and disenchantment when they're not getting the praise they feel they deserve - you can't see their face and their minute-by-minute attitude.  For this reason I'd suggest that it's probably better to err on the side of too much praise than too little, and maybe even have a reminder in your calendar that pops up every couple of days so you don't forget.
My supervisor, or someone at work, seems to care about me as a person.
The main thing is honesty, and if you can fake that you've got it made...

In all seriousness though, you do actually need to care.  In order to care you need to connect with people. 

You'll spot a repeating theme here, and at the risk of sounding like a broken record, you can only connect with people if you communicate with them, and with a remote workforce that takes a lot of effort.

Whilst this point isn't just about the tough times, if you find someone's having a hard time then you need to break that remoteness, get yourself into their locale and meet up on neutral territory. Show that you care enough about them as a person that you'll take the time to go see them in their local café.  Show that it's not all one way, that you'll make the effort.

It's about making sure that your team know that it's not all about the work they need to do today, but about them as a human being having a valued place in a team that supports each other.

For some people it will be inappropriate to cross into the personal life; maybe they like working in a remote team precisely because it's remote. However, it can still be valuable for those people to know that you understand and respect that, rather than thinking you simply don't care about them.

Even people who don't want regular catch-ups want to be reminded that you know that and that you're trying your best to act in line with their desires.

You have to be extremely careful about crossing people's personal boundaries and invading their personal space.  Be honest with yourself about that, and recognise that not everyone wants their boss to be their best friend and that for most people it would be extremely distressing if you turned up on their doorstep unannounced!
There is someone at work who encourages my development.
When you're working remotely it can sometimes seem like you have nothing other than unrealistic demands, one after the other, from a manager who then veers wildly into forgetting you exist. This is what you need to try to overcome.

There needs to be a tough combination of slack in the schedule, freedom to explore and encouragement to follow new paths.

If your team have no time to do anything other than the day's work then they have no opportunity to develop.
If they have plenty of time, but no contact then they'll feel you don't care about their development.

You need to bring conversations on development to the front and ensure that they're had out loud.
Ensure that you have a process in place to discuss the direction your staff want to move in, and ensure that they have the support they need in order to take those steps.  This may involve decent expenditure on training or on in-house resources and applications, or it may be as simple as just letting your staff have time to explore.  It certainly includes letting them fail from time to time and not being judgemental about the outcome.

Not all of this can be done remotely. It's tough to feel the support of someone who is not physically present, and as with so many of these points you need to acknowledge that you're going to travel. You absolutely need some face-to-face time.

It may be that you need to put a central training team together and fly, train or bus people in to get their training.

You should!

It may end up being more expensive than it would have been to have a co-located office and training team, but that's the decision you took when you decided to employ a remote team.

Good quality learning and development software can help, as can access to third-party on-line training catalogues, and I imagine that there is a greater return on investment on these tools than there would be in a local office.  However, making courses available to people is not the same as encouraging and supporting them in their development.

Consider mentoring programmes and ensure that you pay the expenses to get people together with their mentors.  Don't just assume that the mentors know what they're doing: put a mentoring team together so that they can support each other, and ensure that you have a training budget for teaching people how to be a mentor.  Don't forget, being a mentor can be a great way to develop the mentor!

If you want your team to think you're serious about their development, you need to get serious about their development.
At work, my opinions seem to count.
I'd suggest that in order for a new team member to feel their opinions matter they first need to feel that their co-workers' opinions matter.

From that you can then gestate the idea that they are allowed to have opinions, leading to you following through on some of their thoughts and ideas so that they feel their ideas matter.

Simple eh?

At the core of it, as always is the need to communicate. Not just back to the team member with the big idea, or serious concern, but with the whole team.

Regularly asking for feedback and opinions and then acting upon them. Becoming known as the manager that doesn't always assume that they know better.

Technology can help with this.  Open forums with no moderation (unless it's absolutely necessary). Having everyone involved in them, from the CEO to the intern, and a culture of respect around the postings that means every question or idea is addressed with care and thought.

That's not to say that every post is publicly stated as the best idea or most insightful question there has ever been, but that common courtesy and time is given in the response.  Most sane people have no problem being told they're wrong as long as it is clear and respectful and comes with an invitation for more.

There is also the HR angle: that people need to be able to state when they think a co-worker is not up to scratch, behaving inappropriately or suchlike.

Accessibility, openness and a visible commitment to acting on information are the only way to get this feeling fostered.  And guess what, it comes back again to two-way communication.
The mission / purpose of my company makes me feel my job is important.
OK, so it can seem that there's very little you can do about this: either your company resonates with your employees or it doesn't. The reality is that you can affect this quite significantly.

It's all too easy to recruit without your company's values in mind. And when I say values, I don't mean those in your company brochure, I mean those true values that actually drive the business.

An estate agency is never going to be driven by anything other than selling or letting houses, and that's the way it should be. There are different ways in which a company may approach that, but the core value is that selling houses is a good thing, and that you'll make money out of it.

Put simply, if you're an estate agent and you hire someone who thinks that a buoyant housing market, the need for a 'property ladder', low interest rates, and easy access to credit are bad things, then you've hired someone who will never feel their job is important.

Consider that in your recruitment process.

I'm not saying that you can't, or shouldn't, have a business with a mix of opinions, merely that you should honestly recognise the limitations of internal corporate marketing.

Having said that, you do need to market the business internally. You still need to remind people why they are here, and why the company is doing what it's doing. If you don't define the culture of the business then individuals will impose a culture upon it, and it may not be the one you want. An outgoing but negative employee can very easily, and often quite unintentionally, impose a negative culture on the whole of a department.

As with so many of these topics, communication is the key, more so with a remote workforce than at any other time.

Let the team know what the company feels is important, and make sure you don't stray too far from the credible truth or your employees will start to think you stand for lies.
My co-workers are committed to doing quality work.
There are three significant risks with a remote workforce that can put this into jeopardy.

First - it can be difficult to spot when you have a member of the team that's not committed to quality work.

Second - it can be difficult to spot someone who thinks their team-mates are not committed to quality work.

Third - it can be difficult to ensure that everyone knows what quality work their team-mates are doing.

With many of the other points the focus is on communication in order to feed back on progress both up and down the chain of command. This one is much more focused on the sideways communication.

At the simplest level this is about regular cross team updates where you ensure that everyone knows what's going on in the whole team, particularly highlighting points of note.  This directly addresses the third risk, but doesn't deal with the other two.

You need to follow it up by fostering an environment where feedback on peers is taken seriously.  You need to ensure that your team feel comfortable asking about their team-mates' progress, or pointing out areas of concern or difficulty.

This involves giving an honest and clear response.

If you feel the comments are unjustified you need to be able to clearly state why, but still ensure you take the comments on board and react to them. Recognise that they may know more about the situation than you do.  You need to give that dual impression - that you value feedback and that you value your staff - you'll hear criticism and concern and act to rectify issues, but you'll defend and protect when it is unjustified.
I have a good friend at work.
Obviously a collection of remote workers have far fewer chances to socialise than those working together in an office.  They'll never just decide to go to the pub on a Wednesday evening, and never naturally form those odd cross-department smoking cliques, nor football ones either - all simply because they're not at the office.  This means they are far less likely to make the kinds of personal connections that they would otherwise.

The problem and potential solutions are fairly clear but easy to overlook.

You absolutely have to have a higher than usual entertainments budget. You have to meet up at least every month in order for those face to face relationships to blossom. But it's more than that. You have to foster an environment where building remote relationships is also the norm. You have to provide virtual replacements for the Wednesday evening pub and smoker's corner.

For example, your management team must have a relaxed attitude when communicating via mail.  It has to be clear that the email system is more than just a business tool, that it can be a social one too. You have to make an effort to build an environment in which social networks will blossom.

Consider tools like Yammer (a corporate social networking tool) and then push the management to actually use them, for a combination of business and social reasons.

Provide the mechanism to allow for the hosting of virtual book clubs, badminton ladders and a Modern Warfare 3 clan.

Recognise the kinds of people you have employed, ensure that they have a means of accessing like-minded people at work, and then make it feel normal for them to reach out and find each other.

In whatever offices you do have, don't be afraid to add a big chill-out area and kitchen so that when people are in the office they get that reinforcement: "this is a company where we actively encourage you to be friends".
In the last six months someone at work has talked to me about my progress.
There is no reason why this should be difficult. Organise regular meetings, on-line or otherwise, to discuss progress. Have a solid process in place that can flex for individual needs.  All the things you would normally do.  Every six months is a bare minimum, every two is OK, once a month is ideal - as a general rule.

I could labour the point, but I think most of what needs to be said has been said already!
This last year I have had opportunities at work to learn and grow.
It can be very tempting to feel that your home workers are sitting at home happy in the knowledge that they're doing a good job and have a great work / home life balance. Maybe that's true. Maybe all they want is to get their job done and then play in the garden with their kids.

Maybe.

However, just because they're remote doesn't mean they're not ambitious.  I don't think there's any reason why a home worker will be any less likely than an office one to want to progress, either in their career, or personally.

Also, not every remote worker is a home worker.

Those team members that are sitting at a desk 50 miles away, out of sight, are more able to look for opportunities outside of your company than someone that's sat 5 metres away.  Take their progress as seriously as you would any other staff member's.

Catch up regularly to learn about their goals and do what you can to help them reach the realistic ones; learn about their career concerns and do what you can to help them overcome, or at least allay, them.
Tailor your roles to suit the talents and desires of your team members, and make sure you give those who need it, deserve it and are up to it the opportunity to stretch themselves in new directions.

If you don't give your team members the encouragement and opportunity to develop then they'll find the opportunities through a new role in a new company, and just like your local workers, you'll have no idea it's going to happen until it's too late.

So, do more than you think you need to!
Conclusion
Good management is good management, regardless of how local or remote the team is, and good management takes effort.

The truth of the matter is that with a remote workforce that effort is increased.  You need to be more astute, more available and more willing to put the effort in than if your team is sat next to you.  You lose so many of the visual and social clues that a good manager uses every day to gauge the health of their team that you need to compensate in many other areas.  You also have to acknowledge that you're not likely to be as effective; it simply isn't possible.

You need to get imaginative about how you remain in contact, and how you foster a team spirit and an emotional connection.  Technology plays a part, of course it does. Good collaboration tools with social media aspects make it possible to create social groups within your company and allow those people to seek out like-minded individuals in a way that simply wasn't possible, or necessary, 10 years ago.  However, the technology isn't a panacea. You still need to create an environment in which people actually want to connect.  Without the right cultural context, you'll simply have a dead application.
Still, the rules are simple and the techniques familiar.  There's nothing fundamentally different about managing a remote team; you're still dealing with people, after all.

If you honestly care about your role as a manager, feel the need to create a team that performs, and are willing and able to put the time in, then you probably won't go far wrong.

Measuring the time left

Sun, 2013-06-09 08:30
Burn-down (and burn-up, for that matter) charts are great for those that are inclined to read them, but some people don't want to have to interpret a pretty graph; they just want a simple answer to the question "How much will it cost?"

That is, if, like me, you work in what might be termed a semi-agile*1 arena, then you also need some hard-and-fast numbers. What I am going to talk about is a method for working out the development time left on a project that I find to be pretty accurate. I'm sure that there are areas that can be finessed, but this is a simple calculation that we perform every few days that gives us a good idea of where we are.
The basis.
It starts with certain assumptions:
You are using stories.
OK, so they don't actually have to be called stories, but you need to have split the planned functionality into small chunks of manageable and reasonably like-sized work.
Having done that you need to have a practice of working on each chunk until its finished before moving on to the next, and have a customer team test and accept or sign off that work soon after the developers have built it.
You need that so that you uncover your bugs, or unknown work as early as possible, so you can account for them in your numbers.
Your customer team is used to writing stories of the same size.
When your customer team add stories to the mix you can be confident that you won't always have to split them into smaller stories before you estimate and start working on them.
This is so you can use some simple rules for guessing the size of the work that your customer team has added but your developers have not yet estimated.
You estimate using a numeric value.
It doesn't matter if you use days of work, story points or function points, as long as it is expressed as a number, and something estimated at 2 of your units is expected to take the same as 2 things estimated at 1.
If you don't have this then you can't do any simple mathematics on the numbers you have, and it'll make your life much harder.
Your developers quickly estimate the bulk of the work before anything is started.
This is not to say that the whole project has a Gandalf-like start-up: "Until there is a detailed estimate, YOU SHALL NOT PASS"; rather that you T-shirt cost, or similar, most of your stories so that you have some idea of the overall cost of the work you're planning.
You need this early in the project so that you have a reasonable amount of data to work with.
Your developers produce consistent estimates.
Not that your developers produce accurate estimates, but that they tend to be consistent; if one story is underestimated, then the next one is likely to be.
This tends to be the case if the same group of developers estimate all the stories and they all involve making changes to the same system. If a project involves multiple teams or systems then you may want to split them into sub-projects for the purposes of this calculation.
You keep track of time spent on your project.
Seriously, you do this, right?
It doesn't need to be a detailed analysis of what time is spent doing what, but a simple total of how much time has been spent by the developers, split between the time spent on stories and that on fixing defects.
If you don't do this, even on the most agile of projects, then your bosses and customer team don't have the real data that they need to make the right decisions.
You, and they, are walking a fine line towards negligence.

If you have all these bits then you've got something that you can work with...
The calculation.
The calculation is simple, and based on the following premises:

  • If your previous estimates were out, they will continue to be out by the same amount for the whole of the project.
  • The level of defects created by the developers and found by the customer team will remain constant through the whole project.
  • Defects need to be accounted for in the time remaining.
  • Un-estimated stories will be of a similar size to previously completed work. 
The initial variables:

totalTimeSpent = The total time spent on all development work (including defects).

totalTimeSpentOnDefects = The total time spent by developers investigating and fixing defects.

numberOfStoriesCompleted = The count of the number of stories that the development team have completed and released to the customer.

storiesCompletedEstimate = The sum of the original estimates against the stories that have been completed and released to the customer.

totalEstimatedWork = The sum of the developers' estimates against stories and defects that are yet to be done.

numberOfUnEstimatedStories = The count of the number of stories that have been raised by the customer but not yet estimated by the development team.

numberOfUnEstimatedDefects = The count of the number of defects that have been found by the customer but not yet estimated by the development team.
Using these we can work out:
Time remaining on work that has been estimated by the development team.
For this we use a simple calculation based on the previous accuracy of the estimates.
This includes taking into account the defects that will be found, and will need to be fixed, against the new functionality that will be built.


estimateAccuracy = totalTimeSpent / storiesCompletedEstimate

predictedTimeRemainingOnEstimatedWork = ( totalEstimatedWork * estimateAccuracy )
Time remaining on work that has not been estimated by the development team.
In order to calculate this, we rely on the assumption that the customer team have got used to writing stories of about the same size every time.
You may need to get a couple of developers to help with this by splitting things up with the customer team as they are creating them. I'd be wary of getting them to estimate the work, though.

averageStoryCost = totalTimeSpent / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedStories = numberOfUnEstimatedStories * averageStoryCost


averageDefectCost = totalTimeSpentOnDefects / numberOfStoriesCompleted

predictedTimeRemainingOnUnEstimatedDefects = numberOfUnEstimatedDefects * averageDefectCost 
Total predicted time remaining
The remaining calculation is then simple: it's the sum of the above parts.
We've assessed the accuracy of previous estimates, put in an allocation for bugs not yet found, and assigned a best-guess estimate against the things the development team haven't yet put their own estimate against.

totalPredictedTimeRemaining = predictedTimeRemainingOnEstimatedWork + predictedTimeRemainingOnUnEstimatedStories + predictedTimeRemainingOnUnEstimatedDefects 
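
To make the whole thing concrete, here's a minimal JavaScript sketch of the calculation as a single function, with a worked example. The function name, the shape of the input object and the sample numbers are mine, purely for illustration - the formulas are exactly the ones above.

// A sketch of the full calculation; the inputs are the variables defined above.
function predictTimeRemaining( project ) {
    // How far out the previous estimates have been, including the cost of
    // fixing the defects found along the way.
    const estimateAccuracy = project.totalTimeSpent / project.storiesCompletedEstimate;
    const predictedTimeRemainingOnEstimatedWork = project.totalEstimatedWork * estimateAccuracy;

    // Average real cost of a completed story, and of the defects raised per completed story.
    const averageStoryCost = project.totalTimeSpent / project.numberOfStoriesCompleted;
    const averageDefectCost = project.totalTimeSpentOnDefects / project.numberOfStoriesCompleted;

    const predictedTimeRemainingOnUnEstimatedStories = project.numberOfUnEstimatedStories * averageStoryCost;
    const predictedTimeRemainingOnUnEstimatedDefects = project.numberOfUnEstimatedDefects * averageDefectCost;

    return predictedTimeRemainingOnEstimatedWork
        + predictedTimeRemainingOnUnEstimatedStories
        + predictedTimeRemainingOnUnEstimatedDefects;
}

// Worked example, in days: completed work has taken 1.2x its original estimate (120 / 100),
// so the 50 days of estimated work remaining is predicted to take 60 days, plus
// 4 un-estimated stories at 6 days each and 3 un-estimated defects at 1 day each.
predictTimeRemaining( {
    totalTimeSpent : 120,
    totalTimeSpentOnDefects : 20,
    numberOfStoriesCompleted : 20,
    storiesCompletedEstimate : 100,
    totalEstimatedWork : 50,
    numberOfUnEstimatedStories : 4,
    numberOfUnEstimatedDefects : 3
} ); // 60 + 24 + 3 = 87 days
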
The limitations
I find this calculation works well, as long as you understand its limitations.
I hope to present some data in this blog very soon, as we already have some empirical evidence that it works.
Admittedly, for the first 20% or so of the project the numbers coming out of this will fluctuate quite a bit. This is because there isn't enough 'yesterday's weather' data to make the estimate accuracy calculation meaningful. The odd unexpectedly easy (or hard) story can have a big effect on the numbers.
Also, if your testing and accepting of stories lags far behind your development, or if you don't fix your bugs first, you will underestimate the number of bugs in the system. However, if you know these things you can react to them as you go along.
Further Work
I am not particularly inclined to make changes to this calculation, as the assumptions and limitations are perfectly appropriate for the teams that I work with. For other teams this may not be the case, and I might suggest some slight alterations if you think they'd work for you.
Estimating number of defects not yet found.
It seems reasonable for you to look at the average number of defects raised per story accepted and use this to work out the number of defects that have not yet been found.  These could then be included in your calculation based on the average cost of defects that you've already fixed.
This might be a good idea if you have a high level of defects being raised in your team.  I'd define 'high' as anything over about 20% of your time being spent fixing defects.
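
If you wanted to fold that in, it might look something like the sketch below. The names are mine, and defectsRaisedSoFar and numberOfEstimatedStoriesRemaining are extra counts you'd need to start tracking - treat it as an assumption-laden illustration rather than part of the method above.

// Sketch: predict the cost of defects that haven't been found yet, based on the
// average number of defects raised per completed story so far.
function predictUndiscoveredDefectCost( project ) {
    const defectsPerStory = project.defectsRaisedSoFar / project.numberOfStoriesCompleted;
    const averageDefectCost = project.totalTimeSpentOnDefects / project.numberOfStoriesCompleted;

    // Stories still to be delivered and accepted: both the estimated backlog and
    // the ones the customer team have raised but no-one has estimated yet.
    const storiesStillToAccept = project.numberOfEstimatedStoriesRemaining
        + project.numberOfUnEstimatedStories;

    return storiesStillToAccept * defectsPerStory * averageDefectCost;
}
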
Using the estimate accuracy of previous projects at the start of a new one.
As I pointed out earlier, a limitation of this method is the fact that you have limited information at the start of the project and so you can't rely on the numbers being generated for some time.  A way of mitigating this is to assume that this project will go much like the previous one.
You can then use the estimate accuracy (and defect rate, if you calculated one) from your previous project in order to mitigate the lack of information in this.
If you're using the same development team and changing the same (or fundamentally similar) applications, then this seems entirely appropriate.
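
In code terms that could be as simple as falling back to the previous project's accuracy until this project has enough completed stories of its own - again, just a sketch, with a made-up name and threshold:

// Sketch: use the previous project's accuracy until this project has enough
// completed stories for its own 'yesterday's weather' to be meaningful.
const MINIMUM_STORIES_FOR_OWN_DATA = 10; // arbitrary threshold - tune to suit your team

function currentEstimateAccuracy( project, previousProjectAccuracy ) {
    if ( project.numberOfStoriesCompleted < MINIMUM_STORIES_FOR_OWN_DATA ) {
        return previousProjectAccuracy;
    }
    return project.totalTimeSpent / project.storiesCompletedEstimate;
}
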

*1 Semi-agile: I'd define this as a set-up where the development of software is performed in a fully agile manner, but the senior decision makers still rely on business case documentation, project managers and meetings once a month for updates.

Pleasing line

Mon, 2010-05-17 02:47
Gotta admit, I'm quite pleased with this line from my new ORM, object-based database connection library...



$oFilter = Filter::attribute('player_id')->isEqualTo('1')->andAttribute('fixture_id')->isEqualTo('2');

