Rob Baillie
Lightning Web Components - Conditional formatting and iterators
One of the really elegant parts of Lightning Components was the ability to conditionally apply classes based on data.
This is something that is no longer available to us, as the expressions we are allowed to put into templates are now limited to either:
- A property of the Javascript class (or a sub-property that can be referenced through a top level property).
- A getter, that accepts no parameters.
I’ve already mentioned a change in this area in this blog post about building re-usable components, but I felt it was time to drill into this just a little further...
The scenario
Let's say we want to render a list of objects. There's a flag 'isSelected' on each of the records, and if that flag is set we want to change the rendering for that particular record.
JSON data:
[ { "id" : 1
, "name" : "Anne"
, "isSelected" : false },
{ "id" : 2
, "name" : "Bob"
, "isSelected" : true },
{ "id" : 3
, "name" : "Carla"
, "isSelected" : true }
]
Required Output:
<ul>
<li class="record">Anne</li>
<li class="selected record">Bob</li>
<li class="selected record">Carla</li>
</ul>
Lightning Component
In a Lightning Component, this would be near trivial, as we could use a ternary operator in the template to render the inclusion of the 'selected' class conditionally.
<ul>
<aura:iteration items="{!v.records}" var="thisRecord">
<li class="{!(thisRecord.isSelected?'selected':'') + ' record'}">{!thisRecord.name}</li>
</aura:iteration>
</ul>
The reason this is so simple, is that we could put proper expressions into our replacements in Lightning Components, giving us fantastic flexibility in the output for each individual attribute.
Unfortunately (for this case), this isn't possible in Lightning Web Components...
Lightning Web Component
First up, let's just clarify what we mean when we say we can no longer do the string concatenation, or the ternary operator, in an attribute expression, as I detailed in my earlier post.
What we mean is, we can’t do the following:
<ul>
<template for:each={records} for:item="thisRecord">
<li class={(thisRecord.isSelected?'selected':'') + ' record'}>{thisRecord.name}</li>
</template>
</ul>
All we can do is reference a single value from our data held against our Javascript object, or call a getter against it. E.g. (not that this template is of much use to us right now)
<ul>
<template for:each={records} for:item="thisRecord">
<li key={thisRecord.id} class={thisRecord.isSelected}>{thisRecord.name}</li>
</template>
</ul>
OK - so what other options do we have?
Option 1 - Build your class lists in your data
So, we could build up the list of classes that we want to render against each record in our data - once the data is populated from wherever, we can loop over the records and update the data so we end up with something like:
JSON data:
[ { "id" : 1
, "name" : "Anne"
, "isSelected" : false
, "classes" : "record" },
{ "id" : 2
, "name" : "Bob"
, "isSelected" : true
, "classes" : "selected record" },
{ "id" : 3
, "name" : "Carla"
, "isSelected" : true
, "classes" : "selected record" }
]
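For illustration, that processing might look something like this ('processRecords' is a hypothetical method; the spread copies each record into a new object, since data retrieved from Apex is immutable - more on that below):
processRecords( records ) {
    this.records = records.map( record => ({
        ...record,
        classes: record.isSelected ? 'selected record' : 'record'
    }) );
}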
We can then render the required output like this:
<ul>
<template for:each={records} for:item="thisRecord">
<li key={thisRecord.id} class={thisRecord.classes}>{thisRecord.name}</li>
</template>
</ul>
Pros:
- The template is simple.
- Since we're processing in Javascript, we can draw on any information and make whatever complex rules we want.
Cons:
- We need to process the data after we've retrieved it. If the data comes from a @wire'd property or function then we're in trouble - we'd need to jump through some hoops to get the classes specified. We'll likely find that @wire is no longer an option because of that processing, and that we need to retrieve the data with an imperative call instead.
- Data retrieved from Apex is immutable, so if this is the only thing we need to add to the data, we may find that we need to copy the data into new objects, or add a new data structure, in order to get the classes property added.
- The logic for the classes that each record should have assigned is held in Javascript (or even worse, Apex).
Option 2 - Conditionally render the tag in its two different forms
If we want to avoid doing anything complex in our Javascript, we can add template 'if's into the markup, and conditionally render the <li> tag in its two different forms.
For example, we could do the following:
<ul>
<template for:each={records} for:item="thisRecord">
<template if:true={thisRecord.isSelected}>
<li key={thisRecord.id} class="selected record">{thisRecord.name}</li>
</template>
<template if:false={thisRecord.isSelected}>
<li key={thisRecord.id} class="record">{thisRecord.name}</li>
</template>
</template>
</ul>
Pros:
- The Javascript doesn't contain any of the logic for the conditional rendering.
Cons:
- We're breaking the "Don't repeat yourself" (DRY) principle, and repeating the structure of the <li> tag in each side of the IF condition. In this simple case this may not seem like a big deal, but still - any change to that rendering now needs to be made in 2 places instead of 1. And let's be honest, how often is the case this simple? We'll probably find that we have to copy a LOT of the template to work like this.
Option 3 - Isolate the part that changes
Another alternative is to use the template if, but to isolate the part that changes from the part that doesn't. That is, we introduce HTML inside our 'if:true' that exists only to apply the formatting that should be assigned to the 'isSelected' records.
That is, we do the following in our template to introduce a new, standalone div that has the 'selected' class applied, and then wrap the content of the <li> in another div.
<ul>
<template for:each={records} for:item="thisRecord">
<li key={thisRecord.id} class="record">
<template if:true={thisRecord.isSelected}>
<div class="selected"></div>
</template>
<div>{thisRecord.name}</div>
</li>
</template>
</ul>
Having done this, we can use more advanced CSS selectors to apply our 'selected' style to the div that follows the div with 'selected' as its class.
For example, let's say our 'selected' records should have a green border:
.selected+div {
border: 1px solid green;
}
The selector '.selected+div' means 'the div that immediately follows the element with the class selected'.
You can read about CSS Selectors here.
Pros:
- We conditionally render only the addition of the class in the template - nothing is repeated.
- The Javascript doesn't contain any of the logic for the conditional rendering.
Cons:
- We need to introduce additional structure into the HTML that exists purely to apply classes to other elements. This isn't ideal, and can change the behaviour of other selectors further down the structure (e.g. we have introduced an extra div here - what impact does that have?)
Option 4 - Decompose into a sub-component
It would be good if we could call a getter function at run time in order to get the list of classes, along the lines of the component described in the earlier post.
The problem is that we can't call a function and pass it the context of the individual record that we are rendering.
So does that mean we can't call a function?
No, it just means that we need to narrow the context of the component down into each record before we call the function - and we can do that with a sub-component that just renders the <li>.
We can call our sub-component (recordRenderer) with something like this:
<ul>
<template for:each={records} for:item="thisRecord">
<c-record-renderer key={thisRecord.id} record={thisRecord}></c-record-renderer>
</template>
</ul>
Our sub-component template can be:
<template>
<li class={classes}>{record.name}</li>
</template>
And our sub-component javascript can be:
import { LightningElement, api } from 'lwc';
export default class RecordRenderer extends LightningElement {
@api record;
get classes() {
if ( this.record.isSelected ) {
return 'selected record';
}
return 'record';
}
}
Pros:
- Both the parent and sub-component templates are very simple, and focused on small aspects of the rendering - no IFs in the templates is a good thing.
- It's very possible that there will be more complex behaviour required in the future, and having the context set to be the individual record could make that behaviour much simpler to implement. For example, passing data into events raised by a 'click' could be well served by that distinction between the parent and child components and contexts.
Cons:
- OK, we have the classes logic inside the Javascript, much like in Option 1, but we don't have to process the data - the getter is called at render time. And, presumably, the code in the renderer is near trivial, so maybe that's not such a big problem.
The limitations of the expressions allowed in templates make for a less elegant solution to this kind of problem.
I would find it hard to argue that either Option 1 or 2 is worth the pain - neither fiddling with the data model nor repeating large chunks of the template feels like good practice to me.
Options 3 and 4 both have their merits - Option 3 has very few drawbacks other than the introduction of the almost redundant HTML elements - but if you can fulfil your needs in this way, well, the CSS selectors are there for a reason, so why not?
Personally though, I'd almost always favour Option 4 - decomposing into smaller components.
Honestly, I think that pushing us down this route is a good thing - smaller components are simpler to understand, just as smaller classes and methods are, and with the standardised event model that Lightning Web Components use, this approach can make for very simple, small, and potentially re-usable components that present themselves in much simpler ways.
It may require us to think about our components in a different way to how we approached Lightning Components, but I firmly believe that we will end up with cleaner, and more easily understandable and maintainable building blocks.
Lightning Web Components - Events, and listening to your children
Another fantastic inclusion in Lightning Web Components is the completely reworked events model.
De-composing functionality and building smaller, and more generic building blocks has become much simpler and much more intuitive.
In the world of Lightning Components I never got on with events. The idea of adding a Salesforce configuration for an event, registering events on the dispatcher template, and then registering listeners on the receiving template seemed really cumbersome. And then added onto that was the differences in syntax between component and application events. They just felt really unnatural.
In Lightning Web Components all this has become significantly simpler, and much more in-keeping with the standard HTML / Javascript model.
We've already seen how we can use @api to allow state to be passed into our components. Now we're talking about notifying our parents when events occur.
I could go into deep detail on how this is done, but the documentation on this area is spot on, and there's no need to repeat it - follow the guide in the docs and you can't go far wrong. It's particularly well written and introduces the concept brilliantly.
That said, there has to be something to say, right?
Well, yes, and before I go into some of the less obvious limitations, let's just present a simple example:
- In the child component, we create and dispatch an event.
- When we include the child component, we specify the handler for the event.
Something along the lines of:
Child component's Javascript
import { LightningElement, track } from 'lwc';
export default class ChildComponent extends LightningElement {
@track value;
// Called from the onchange handler on an input
handleValueChanged( event ) {
this.value = event.target.value;
this.dispatchEvent( new CustomEvent( 'valuechanged', { detail: this.value } ) );
}
}
Parent component's template
<c-child-component onvaluechanged={handleOnValueChanged}></c-child-component>
Parent component's Javascript
import { LightningElement, track } from 'lwc';
export default class ParentComponent extends LightningElement {
@track updatedValue;
handleOnValueChanged( event ) {
this.updatedValue = event.detail;
}
}
OK. So how simple is that? No Salesforce configuration to create, nice simple syntax, event handlers defined in the template - exactly the same way you would do it if it was a standard HTML tag.
Without wanting to repeat the documentation from Salesforce, it's worth calling out a few important points:
- dispatchEvent and CustomEvent are standard Javascript.
- When you include the child component, you specify the handler for the event in the template.
- The event name should not start with 'on'; the attribute you assign the handler to will have 'on' added to the start.
- The fact we can specify the handler as 'onvaluechanged' when we create the tag is LWC specific, and for very good reason (explained later). You cannot do this with standard Web Components.
- We can pass data from the child component in the event, by passing an object as the second parameter.
- Note that the data can only be in the 'detail' property. If you add data to any other standard property you may accidentally overwrite it, and any non-standard property won't be visible in the event anyway - you put data into 'detail', and that's all you have. Live with it.
- You can pass an object, but if you do you should construct it there and then. But you probably shouldn't.
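For example, if we did want to pass an object in 'detail', constructing it there and then would look something like this (a sketch, with the object built in the dispatch itself rather than held in a reference):
this.dispatchEvent( new CustomEvent( 'valuechanged',
    { detail: { value: this.value } } ) );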
OK, that's all well and good - but where are the limitations?
Well, the main one I've found was a real surprise to me - to the point that I'm worried that I've misunderstood something.
In the standard Javascript events model - all events propagate to all levels.
For example, if I have the following HTML:
<div id="grandparent" onchange="handleChange();">
<div id="parent">
<div id="child">
<input onchange="handleChange();"/>
</div>
</div>
</div>
When the value of the input changes, the onchange event is handled by both the onchange handler on the input and the one on the 'grandparent' div. Events propagate through the whole DOM, unless a handler stops them by calling 'stopPropagation' against the event.
It's generally recognised that this is a good thing, and that events should not be stopped unless there's very good reason.
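For completeness, a handler that did want to swallow an event would do something like this:
handleChange( event ) {
    // stop this event from bubbling any further up the DOM
    event.stopPropagation();
}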
However, as far as I can see, this is not true when you cross boundaries between LWCs.
For example, if I had the above example for a child component, and included it in a parent as such:
Parent component's template
<c-child-component onvaluechanged={handleOnValueChanged}></c-child-component>
And then included that in the grandparent as such:
Grandparent component's template
<c-parent-component onvaluechanged={handleOnValueChanged}></c-parent-component>
Assuming that the parent component does not raise a 'valuechanged' event of its own, the 'onvaluechanged' handler on the grandparent component will never get called.
It seems that you can only handle a component's event in its parent's scope.
Note: these are actually slightly different scenarios I'm explaining, but I think it's worthwhile in order to illustrate the point. Also, there is a 'bubbles' property that you can set on the CustomEvent when you create it, although I didn't see a change in behaviour when I did that.
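For reference, that experiment looked something like the following - 'bubbles' is a standard CustomEvent option ('composed' is its standard companion for crossing shadow boundaries):
// sketch: ask the event to bubble up through the DOM
this.dispatchEvent( new CustomEvent( 'valuechanged',
    { detail: this.value, bubbles: true } ) );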
As I've said, I'm surprised by this behaviour, so am happy to be told I'm wrong, and learn where my mistake is.
So what of the 'on' behaviour? Why is this such a cool addition?
Well, that's best explained by illustrating what we would need to do if this wasn't available to us.
Let's go back to our child component
Child component's Javascript
import { LightningElement, track } from 'lwc';
export default class ChildComponent extends LightningElement {
@track value;
// Called from the onchange handler on an input
handleValueChanged( event ) {
this.value = event.target.value;
this.dispatchEvent( new CustomEvent( 'valuechanged', { detail: this.value } ) );
}
}
It dispatches a 'valuechanged' event that we can handle in a parent component.
We include the child component with a simple node:
Parent component's template
<c-child-component></c-child-component>
Note we are no longer setting onvaluechanged because, in our hypothetical scenario, this is not possible.
Now, in order to handle the event we need to attach a handler to the component in our parent component's Javascript.
First we need to find it, so we set a property on the component that we can use to retrieve it. You may default to setting an 'id', but it turns out that Salesforce will adjust the ids on nodes, so we can't rely on that. Instead, we decide to set a class:
Parent component's template
<c-child-component class="child"></c-child-component>
Now, the parent component's Javascript. We need to hook into one of the lifecycle callbacks in order to attach our handler.
You can see the docs for those functions here.
From there we find:
- We can't use the constructor, as the component hasn't been added to the DOM yet.
- We can't use the connectedCallback, as the component's children haven't been rendered yet.
- We can use the renderedCallback, but this gets called multiple times - whenever any reactive properties change, so we need to protect against multiple adds.
So, maybe we can do this:
allocatedEventListeners = false;
renderedCallback() {
if ( ! this.allocatedEventListeners ) {
this.template.querySelector('.child').addEventListener( 'valuechanged', this.handleOnValueChanged );
this.allocatedEventListeners = true;
}
}
That is a bit clunky, but it looks like it should work - we attach the 'handleOnValueChanged' function as the listener for the 'valuechanged' event.
Unfortunately, it doesn't work, because of a fundamental behaviour of Javascript - the event handler doesn't have access to the 'this' we expect. And if you're not an experienced Javascript developer then that's when things start to get a bit crazy (actually, even if you ARE an experienced Javascript developer, I suspect it STILL gets a little messed up).
Basically, 'this' isn’t guaranteed to be what you think it is. If you write code that behaves in a procedural way, then it will generally be the object in which the method is defined. But as soon as you add in callbacks, Promises and asynchronous behaviour, it isn't guaranteed to be.
'this' can be simply the context in which the function runs, rather than the object or class in which the function is defined. This is an incredibly powerful aspect of Javascript that is very difficult to get to grips with unless you’re used to seeing it.
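A minimal illustration of the problem, outside of any framework:
class Counter {
    count = 0;
    increment() {
        this.count++;
    }
}

const counter = new Counter();
const handler = counter.increment; // the method loses its connection to 'counter'
handler();                         // throws - 'this' is undefined here, not the Counter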
In Lightning Components you can see the effect of this in code such as Apex callouts in helpers where you end up with:
let self = this;
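That pattern typically looks something like this ('somethingAsynchronous' and 'handleResponse' are hypothetical stand-ins):
let self = this;
somethingAsynchronous( function( response ) {
    // 'this' is unreliable in here, but 'self' still refers to the helper
    self.handleResponse( response );
});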
In our particular case, you could use an alternative - the fat arrow notation for defining functions.
event => { this.handleOnValueChanged( event ) }
Which would transpile to (or is synonymous with) something like this:
var _this = this;
function handleEvent( event ) {
    _this.handleOnValueChanged( event );
}
Look familiar?
The resulting code for adding the event handler could end up like this:
allocatedEventListeners = false;
renderedCallback() {
if ( ! this.allocatedEventListeners ) {
this.template.querySelector('.child')
.addEventListener( 'valuechanged',
( event ) => { this.handleOnValueChanged( event ) } );
this.allocatedEventListeners = true;
}
}
In the end, this would work. But no-one would suggest it was elegant. And in order to get it working we had to brush up against some advanced behaviour of 'this'. Now, I admit that people are going to have to learn how 'this' and its binding behaves in order to write reliable Lightning Web Components - but just to add an event handler?
The reality is that we don't have to think about it - Salesforce have given us a very usable shorthand for it, and we should be extremely grateful for it!
Lightning Web Components - Unit Testing, an immediate impression
One of the things I’m most excited about in Lightning Web Components is the ability to write Unit Tests for the components.
The unit testing framework of choice is Jest, and it looks like a good fit. Not least, it's the framework of choice for Facebook, and it describes itself as well suited to React apps. Why should that matter? Well, React is a 1-way bound Javascript framework - and so is LWC.
So I was looking forward to getting into Unit Testing, following the documentation for testing wired components.
Unfortunately, the documentation didn't work out for me, and it looks like there are a couple of mistakes in there:
The example ‘getRecord.json’ file isn’t valid JSON.
In order for the file to work, the field names need to be surrounded in double quotes.
I.E. Instead of:
// getRecord.json
{
fields: {
Name: {
value: "DYNAMO X1"
}
}
}
The file should be:
// getRecord.json
{
"fields": {
"Name": {
"value": "DYNAMO X1"
}
}
}
Interrogating the ‘element’ for its state does not seem to work.
Instead, I found that I needed to get data from the document object.
I.E. The following does not work:
// Resolve a promise to wait for a rerender of the new content.
return Promise.resolve().then(() => {
const content = element.querySelector('.content');
expect(content.textContent).toBe("Name:DYNAMO X1");
});
But the following does:
// Resolve a promise to wait for a rerender of the new content.
return Promise.resolve().then(() => {
const content = document.body.querySelector('.content');
expect(content.textContent).toBe("Name:DYNAMO X1");
});
Mocking doesn't seem to work for Apex, only LDS
From the quick test I did, I could get the mocking framework to work for the Lightning Data Service, once my implementation of the example was tweaked. However, I couldn't get it to work with an imported Apex method.
I didn't see a reference to this being missing, though I guess I may have just missed it, and I know that the recommendation is to use LDS whenever you can. I just worry that there's a really important use case out there - it seems natural to me that components that use custom Apex are likely to be more complex than ones that use LDS. And with that in mind, it feels like missing Apex mocking would be a big loss to the testing framework.
Hopefully the last part is already known about, is a bit of missing documentation, or is simply that I misunderstood something.
Whatever the case, I plan on doing more investigations into the Unit Testing capabilities, and will obviously blog my findings - but I have to admit that I found the initial experience a little disappointing after the extremely polished experience up to now.
I sincerely hope that it isn't an indicator that Unit Testing is a bit of an after-thought.
Lightning Web Components - @api, slots and getters
I've blogged about a few of the behaviours of Lightning Web Components, but the proof is really in building useful bits. What happens when you actually try to make a re-usable component?
For our example, we'll rebuild 'ui:message'. A now (seemingly) defunct base component that would render a message in a box that is coloured based on the 'severity' of the message being shown. In the original it could be set to 'closable', although we're going to ignore that and focus on just the rendering of it.
In a Lightning component we would use it like this:
Original usage - Lightning Component
<ui:message title="Error" severity="error" >{!v.errorMessages}</ui:message>
Ideally, the version we will create, would be used like this:
Desired usage
<c-message title="Error" severity="error" >{errorMessages}</c-message>
Looks pretty straightforward, and actually - it is. Just as long as we know about a few simple concepts.
Before we go into them, let's see what a working example could look like:
Javascript component
import { LightningElement, api } from 'lwc';
export default class Message extends LightningElement {
@api title;
@api severity;
get classes() {
return this.severity + ' uiMessage';
}
}
HTML Template
<template>
<div class={classes} role="alert" >
<div class="uiBlock" >
<div class="bBody" >
<h4>{title}</h4><slot></slot>
</div>
</div>
</div>
</template>
OK then, let's pick a few of these bits apart, and hopefully we'll explain a few little behaviours along the way.
First up, let's take a look at the '@api' declarations.
@api
The @api property lines are pretty simple to understand - they define that 'title' and 'severity' are publicly available properties of the component. In the context of Lightning Web Components, public and private mean 'available outside of the component' and 'invisible to the outside of the component' respectively. It's tempting to think that this is to do with the scope of the Javascript, but it's not.
That is, every property of the Javascript component is available to be referenced in the HTML template - whether it is 'public' or 'private'. One way of thinking about it is that the HTML template forms part of the component, and so it can see private properties.
Another (probably more accurate) way of thinking about it is that the template is processed by the Javascript component (that code isn't immediately obvious, but it's almost certainly in LightningElement - which this class extends), and the Javascript can see its own properties, so the private ones are available.
However, other components (like ones that include this in their templates) can only see public properties. @api is how you make them public. Doing so means that they are available as attributes on the tag you use to include the component (hence <c-message title="Error"... is possible)
Not only that, but every @api decorated property is also 'reactive'. That is, whenever its value changes the component is re-rendered. The documentation is pretty clear on that point - and is presented as a fundamental property of a public property:
To expose a public property, decorate it with @api. Public properties define the API for a component. An owner component that uses the component in its markup can access the component’s public properties. Public properties are reactive. If the value of a reactive property changes, the component’s template rerenders any content that references the property.
Why would a public property be reactive?
Put simply, if we change the value of one of those properties in a parent component, we want the component to re-render - and it's pretty much guaranteed that we ALWAYS want the component to re-render.
For example, we may do the following:
<c-message title={title} severity={severity} >{messages}</c-message>
When the value of 'title' or 'severity' changes, we would always want the message box to re-render to show our new values. And so the framework takes care of that and makes EVERY public property of the component reactive.
So that takes care of the attributes we need to pass in, what about the content?
Slots
Lightning Components had facets. And they weren't intuitive. I mean, they weren't complex, but they weren't in keeping with HTML - they always felt unnatural, especially in the simplest of cases.
Lightning Web Components fixes that, with slots. And in the simple case they are trivial. The documentation isn't long, and doesn't need to be.
All we need to do, in this simple case, is add <slot></slot> into our component, and the body of any tag that instantiates the component will be rendered in that slot.
Now something that's missing from the documentation, which is a fairly obvious behaviour once you see it in action, is that slots are effectively reactive.
That is, if you change the content of the tag, that content is immediately reflected in the component's rendered output.
So, in our example:
<c-message title="Error" severity="error" >{errorMessages}</c-message>
Whenever the value of 'errorMessages' changes, the slot inside the 'message' component is re-rendered to include the new content.
I admit, I had assumed that this would be the case, but I didn't immediately realise that it was an assumption. So I thought it was worth calling out.
Getters
The final part of the example that I want to explain is the use of the 'getter':
get classes() {
return this.severity + ' uiMessage';
}
What we're doing here is building a list of CSS classes for a node in the component that includes one of the passed-in attributes plus a standard class that must be applied.
The use of the getter illustrates an important difference between the behaviour of the templates in Lightning Components (LC) and Lightning Web Components (LWC), as well as a reminder of the behaviour of properties.
That is, in LC we could have done the following in our template:
<div class="{!v.severity + ' uiMessage'}" role="alert" >
In LC, our replacements could include expressions, so we could build up strings in the template. In LWC, we can't do this, we can only reference properties or getters.
Not only that, but we can't build up the strings in the attribute assignment.
I.E. We can't do this:
<div class="{severity} uiMessage" role="alert" >
In LWC we don't assign properties to attributes in this way, the framework takes care of the wrapping in double quotes, escaping the strings, and other such things, so we can only assign the property, and that's it.
I.E. This is what is allowed:
<div class={severity} role="alert" >
So, if we want to assign more than just the value of 'severity' to the class attribute, we need to build that string up outside of the template.
Your first reaction might be - OK, we can create a trackable property to store it, right?
@track classes = this.severity + ' uiMessage';
But this doesn't work. You'll end up with the classes property defined as 'undefined uiMessage', and it won't change. Why is that?
Well, it's tempting to think that 'track' and 'api' mean that Javascript will re-run when things change, but that's not what they do - nor what the documentation says they'll do.
Rather, if a property is reactive it means that the component will be re-rendered when the property changes. That says nothing about running Javascript.
So when we look at the above, what happens is the property 'classes' is set when the Javascript object is constructed. At this point the property 'severity' is undefined. When the 'severity' is updated via the attribute, the component is re-rendered and the 'classes' property is re-injected into the template, but the Javascript that sets the classes property is not re-run - that is only executed when the object is instantiated.
So, instead of setting the 'classes' property directly, we set up a getter for it:
Javascript component
get classes() {
return this.severity + ' uiMessage';
}
Now, when the 'severity' property changes, the 'classes' property is re-injected. In order to get the value for 'classes', the getter is executed - this is the only way the property can be retrieved. In doing so, the string concatenation is re-evaluated and the new value is retrieved.
Summary
None of the concepts here are particularly difficult, or really that earth shattering, but building even the simplest of re-usable components starts to shed some light on what the parts do and why.
The framework has been very thoughtfully put together, and some of the techniques will be different to what people are used to, having cut their Javascript teeth with Lightning Components, but the changes are for very good reasons. An example like this really shows how those changes make for simple components.
Lightning Web Components - The subtleties of Tracking and Wiring
Following on from yesterday's investigations into the behaviour of '@track', and its effect on untracked properties, I figured I should expand my remit to '@wire'.
My main reason for that was a statement in the documentation relating to code along the lines of:
@api recordId;
@wire(getRecord, { recordId: '$recordId', fields })
contact;
The bit that really piqued my interest was 'The property is reactive'.
In all the examples, it looked like the property being referenced was always tracked or set as an api field (the latter implying the former).
That's the case in the above example - the property passed as a parameter 'recordId' as referenced by '$recordId' is defined as an api field, and is therefore tracked.
There's also that recurring point - 'the component rerenders' - which we saw in my previous post. Recalling that, it essentially meant that a change to a tracked property caused all untracked properties in the same component (and, I am assured, but have yet to prove, child components) to be re-rendered.
So, what is the actual behaviour in this case? Are the implications the same?
Tracked property used as a parameter to a wired property
The natural first example is much like the one shown above. We have an Apex method that takes a parameter. We wire that parameter to a javascript property, and reference a single parameter.
Javascript component
@track searchString;
@wire(searchContacts, { searchString: '$searchString' })
contacts;
When the searchString property is updated:
- The searchString's change causes the wired function to be re-evaluated.
- Therefore the value of 'contacts' changes.
- The framework treats 'contacts' as if it was tracked, and the component is re-rendered.
OK. That's pretty decent, and what the documentation says will happen. No surprises there!
But then I got to thinking: So, what happens if the property isn't tracked?
My first guess was that maybe the Apex method wouldn't get re-executed, but I wasn't entirely convinced - after all, the documentation only really makes a big deal about rendering. So I changed the code and tried again...
Javascript component
searchString;
@wire(searchContacts, { searchString: '$searchString' })
contacts;
This time, when the searchString property is updated:
- The searchString's change causes the wired function to be re-evaluated.
- Therefore the value of 'contacts' changes.
- The framework treats 'contacts' as if it was tracked, and the component is re-rendered.
Erm. OK. It's the same. Hmmm....
Looking at what's actually going on, that does make sense. It's the '$searchString' reference that tells the framework that searchString is reactive with respect to the wiring, so it's that causing the Apex to re-execute. And once the method is re-evaluated, the value of 'contacts' changes, and that causes the component to re-render.
That got me to thinking - so what does the '@track' do against the searchString in the original example? Experience tells me that these things generally have an effect.
So I added something else to my example...
renderedCallback() {
console.log( 'renderedCallback was called' );
}
This hooks into the component's lifecycle, as described here, and will tell us when the component gets re-rendered.
It turns out that my initial understanding of the first example was slightly wrong, though not in a way that would generally have much of an impact.
That is, if the parameter is tracked, you end up with the following:
- The searchString's change causes:
- The component to be re-rendered (since it is tracked).
- The wired function to be re-evaluated (since it is referenced as a '$' parameter).
- The execution of the wired function causes the value of 'contacts' to change.
- The framework treats 'contacts' as if it was tracked, and the component is re-rendered.
The result is that the component is re-rendered twice!
And sure enough, if you take the tracking off the searchString parameter, the component is only re-rendered once.
So, this does reinforce another point that the documentation makes.
Personally, I think I'd call out this particular behaviour and remind people - you don't need to 'track' a property in order for the wired method to re-execute - but maybe it's just my lack of attention that missed that little point.
And I'd also state that you should only track a property that is being used as a reactive wired parameter if you need the property to be re-rendered before the Apex you are calling returns with its result.
Lightning Web Components - the art of Tracking
On Friday I wrote an excited blog post on the introduction of Lightning Web Components.
One of the things I mentioned was that I didn't quite get how '@track' was supposed to work.
Since then, I've found the documentation (add /docs/component-library/documentation/lwc/lwc.get_started_introduction to the end of your Salesforce URL), read it, got confused by my memory, wrote a test-rig and looked at it all in depth.
On the surface, it's simple, but there are some complications, which were the cause of my confusion...
In the documentation it says the following:
OK, nice and simple. So what does that mean?
A single untracked property
Testing with a simple component that contains a single property that is not tracked, I found the following behaviour:
Javascript component
export default class TrackExperiments extends LightningElement {
primitiveUntracked;
handlePrimitiveUntrackChanged( event ) {
this.primitiveUntracked = event.target.value;
}
}
Template
<p>The value is: {primitiveUntracked}</p>
<lightning-input label="Input without an on-change handler"
type="Text"
value={primitiveUntracked}></lightning-input>
<lightning-input label="Input with an on-change handler"
type="Text"
value={primitiveUntracked}
onchange={handlePrimitiveUntrackChanged}></lightning-input>
- Changing the value defined in either of the inputs will not result in the rendered value changing.
- Changing the value defined in the input without the onchange handler will not change the Javascript property.
- Changing the value defined in the input that does have an onchange handler will update the Javascript property.
So, fairly straightforward, and maybe in-line with what you would expect:
- Untracked properties are not re-rendered in the UI when they change
- Untracked properties that are bound to inputs are not automatically updated when the input changes unless there is an onchange handler defined that updates the Javascript property
A single tracked property
Testing with a simple component that contains a single property that is tracked, I found the following behaviour:
Javascript component
export default class TrackExperiments extends LightningElement {
@track primitiveTracked;
handlePrimitiveTrackChanged( event ) {
this.primitiveTracked = event.target.value;
}
}
Template
<p>The value is: {primitiveTracked}</p>
<lightning-input label="Input without an on-change handler"
type="Text"
value={primitiveTracked}></lightning-input>
<lightning-input label="Input with an on-change handler"
type="Text" value={primitiveTracked}
onchange={handlePrimitiveTrackChanged}></lightning-input>
- Whenever the value of the Javascript property changes, it is reflected on the outputted page.
- Changing the value defined in the input without the onchange handler will not change the Javascript property, and so the rendered property does not change.
- Changing the value defined in the input that does have an onchange handler will update the Javascript property, and both the rendered property and the value in the other input is updated.
OK, in my opinion, slightly odd - I was expecting the input binding to be two-way - that's not the case:
Lightning Web Components contain 1-way binding.
Still, it may have surprised me, but it's easy to remember, and no big deal once you know that's the behaviour.
- Tracked properties are re-rendered in the UI when they change.
- Tracked properties that are bound to inputs are not automatically updated when the input changes unless there is an onchange handler defined that updates the Javascript property.
- Inputs that are bound to tracked properties are automatically updated when the underlying property is updated.
A tracked and an untracked property together
Javascript component
export default class TrackExperiments extends LightningElement {
@track primitiveTracked;
primitiveUntracked;
handlePrimitiveTrackChanged( event ) {
this.primitiveTracked = event.target.value;
}
handlePrimitiveUntrackChanged( event ) {
this.primitiveUntracked = event.target.value;
}
}
Template
<p>The value is: {primitiveTracked}</p>
<lightning-input label="Input without an on-change handler"
type="Text"
value={primitiveTracked}></lightning-input>
<lightning-input label="Input with an on-change handler"
type="Text" value={primitiveTracked}
onchange={handlePrimitiveTrackChanged}></lightning-input>
<p>The value is: {primitiveUntracked}</p>
<lightning-input label="Input without an on-change handler"
type="Text"
value={primitiveUntracked}></lightning-input>
<lightning-input label="Input with an on-change handler"
type="Text"
value={primitiveUntracked}
onchange={handlePrimitiveUntrackChanged}></lightning-input>
Now things start to get a little surprising.
- Changing the inputs related to the tracked property works as described above, as if it was the only property on the page.
- Then, changing the inputs related to the untracked property behaves as described above.
- However, changing the tracked property inputs after changing the untracked input, causes both the tracked and untracked properties to be re-rendered.
The last point really took me by surprise - I was expecting that only the tracked property would be updated. This led me to go back to the documentation again and re-read it.
That is (bold added for emphasis):
Yep, I think I'm reading that right, and it looks like it's reflected in the behaviour I see (without delving too deep into it). When you change the value of a tracked property, the whole component re-renders. This means that any untracked properties will also be re-rendered.
I've put together a more complete illustration of that, which you can find in this git repo.
Lightning Web Components - the dawn of (another) new era
Salesforce have a new technology. Lightning Components look like they’re on the way out, and are being replaced with a new technology ‘Lightning Web Components’.
The reasons behind that, and the main principles behind its design are covered very nicely in this article on developer.salesforce.com.
From that we can then get to a series of examples here.
(Note: some of the code snippets used below, to illustrate points, are taken from the recipes linked above)
Now I’m a big supporter of evolution, and I love to see new tools being given to developers on the Salesforce platform, so, with a couple of hours to play with it - what’s the immediate impression?
WOW. Salesforce UI development has grown up.
Salesforce aren’t lying when they’ve said that they’re trying to bring the development toolset up to the modern standards.
We get imports, what look like annotations and decorators, and there’s even mention of Promises. Maybe there’s some legs in this…
It’s easy to dismiss this as ‘Oh no, yet another change’, but the thing is - the rest of industry develops and improves its toolset - why shouldn’t Salesforce?
The only way to keep the product on point IS to develop the frameworks, replace the technology, upgrade, move on. If you don’t do that then the whole Salesforce Ecosystem starts to stagnate.
Or to put it another way - in every other part of the developer community, learning from what was built yesterday and evolving is seen as a necessity. It’s good to see Salesforce trying to keep up.
So what are the big things that I’ve spotted immediately?
import is supported, and that makes things clearer
Import is a massive addition to Javascript that natively allows us to define the relationships between javascript files within javascript, rather than at the HTML level.
Essentially, this replaces the use of most ‘script’ tags in traditional Javascript development.
For Lightning Web Components, we use this to bring in capabilities from the framework, as well as static resources.
E.g. Importing modules from the Lightning Web Components framework:
import { LightningElement, track } from 'lwc';
Importing from Static Resources:
import { loadScript } from 'lightning/platformResourceLoader';
import chartjs from '@salesforce/resourceUrl/chart';
What this has allowed Salesforce to do is to split up the framework into smaller components. If you don’t need to access Apex from your web component, then you don’t need to import the part of the framework that enables that capability.
This *should* make individual components much more lightweight and targeted - only including the capabilities that are required, when they are required.
Getting data on screen is simpler
Any javascript property is visible to the HTML template.
E.g.
export default class WebAppComponentByMe extends LightningElement {
contacts;
We can then render this property in the HTML with {contacts} (none of those attributes to define and none of those pesky v dot things to forget).
Much neater, much more concise.
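For example, a minimal template might simply be:
<template>
    <p>{contacts}</p>
</template>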
We track properties
Looking at the examples, my assumption was that if we want to perform actions when a property is changed, we mark the property trackable using the @track decorator.
For example:
export default class WebAppComponentByMe extends LightningElement {
@track contacts;
I was thinking that, at this point, anything that references this property (on page, or in Javascript) will be notified whenever that property changes.
However, at this point I can't really tell what the difference is between tracked and non-tracked properties - a mystery for another day.
Wiring up to Apex is much simpler
One of the big criticisms of Lightning Components that I always had was the amount of code you need to write in order to call an Apex method. OK, so you have force:recordData for a lot of situations, but there are many times when only an Apex method will do.
In Web Components, this is much simpler.
In order to connect to Apex, we import 'wire' from the framework, and then import our Apex methods into our javascript:
import { LightningElement, wire } from 'lwc';
import getContactList from '@salesforce/apex/ContactController.getContactList';
The first line imports the wire capabilities from the framework, the second then imports the Apex method as a javascript method, therefore making it available to the component.
We can then connect a javascript property up to the method using the wire decorator:
@wire(getContactList) contacts;
Or wire up a javascript method:
@wire(getContactList)
wiredContacts({ error, data }) {
if (data) {
this.contacts = data;
} else if (error) {
this.error = error;
}
}
When the component is initialised, the getContactList method will be executed.
If the method has parameters, that’s also very simple (E.g. wiring to a property):
@wire(getContactList, { searchKey: '$searchKey' })
contacts;
Changing the value of a property causes Apex to re-execute
Having wired up a property as a parameter to an Apex bound Javascript function, any changes to that property will cause the function to be re-executed.
For example, if we:
searchKey = '';
@wire(findContacts, { searchKey: '$searchKey' })
contacts;
Whenever the searchKey property changes, the Apex method imported as ‘findContacts’ will be executed and the contacts property is updated.
Thankfully, we can control when that property changes, as it looks like changing the value in the UI does not automatically change the property on the Javascript object. In order to do that, we need to change the property directly.
E.g. Let’s say we extend the previous example and there’s an input that is bound to the property, and there’s an onchange event defined:
And the handler does the following:
handleKeyChange(event) {
this.searchKey = event.target.value;
}
This will cause the findContacts method to fire whenever the value in the input is changed.
Note that it is the assignment to this.searchKey that causes the event to fire - it looks like the binding from the HTML is 1-way. I admit that I need to investigate this further.
Events do not require configuration to be implemented
Events work in a completely different way - but then that's not a problem - Application and Component events were different enough to cause headaches previously. The model is actually much simpler.
The example in the above referenced repository to look at is ‘PubSub’.
It’s much too involved to into detail here, but the result is that you need to:
- Implement a Component that acts as the messenger (implementing registerListener, unregisterListener and fireEvent)
- Any component that wants to fire an event, or listen for an event, will import that component to do so, firing events or registering listeners - along the lines of the sketch below.
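As a rough sketch of the shape of that (assuming a 'pubsub' component that exposes the three functions named above; the event name and handler here are purely illustrative):
// In the component that fires the event:
import { fireEvent } from 'c/pubsub';

handleSelect() {
    fireEvent( 'recordselected', this.recordId );
}

// In the component that listens for it:
import { registerListener, unregisterListener } from 'c/pubsub';

connectedCallback() {
    registerListener( 'recordselected', this.handleRecordSelected, this );
}
disconnectedCallback() {
    unregisterListener( 'recordselected', this.handleRecordSelected, this );
}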
This would seem to imply that (at least a certain amount of) state within components is shared - looking like those defined with 'const'
Whatever the precise nature of the implementation, a pure Javascript solution is surely one that anyone involved in OO development will welcome.
I suspect that, in a later release, this will become a standard component.
Summary
Some people will be thinking "Man, glad I didn't migrate from Classic / Visualforce to Lightning Experience / Components - maybe I'll just wait a little longer for it all to settle down".
You’re wrong - it won’t settle, it’ll continue to evolve and the technologies will be continually replaced by new ones. Eventually, the jump from what you have to where you need to get to will be so huge that you’ll find it incredibly hard. There’s a reason why Salesforce pushes out 3 releases a year, whether you want it or not, these technology jumps are just the same. The more you put it off, the more painful it’ll be.
The change from Lightning Components to Lightning Web Components is vast - a lot more than a single 3 letter word would have you suspect. The only real similarities between the two frameworks that I’ve seen up to now are:
- Curlies are used to bind things
- The Base Lightning Components are the same
- You need to know Javascript
Other than that, they’re a world apart.
Also, I couldn’t find any real documentation - only examples - although those examples are a pretty comprehensive starting point.
Now, obviously it's early days - we're in pre-release right now, but what I've seen gives me great hope for the framework. It's a significant step forward and I can't wait to see what happens next. I wonder if a Unit Testing framework might follow (I can but hope).
You could wait, but hey, really, what are you waiting for? Come on, jump in. The change is exciting...
LinkedIn, and the GDPR age
- The person is no longer in the organisation’s system, they were found on LinkedIn.
- The person was not sent an e-mail, or phoned, they were messaged within LinkedIn.
- The person has consented to have their data held by LinkedIn for the expressed purpose of being contacted by potential recruiters via the platform.
Things I still believe in
Anyway - here's that list - slightly updated for 2018... if you've seen my talk on Unit Testing recently, you might recognise a few entries.
(opinions are my own, yada yada yada)
- It's easier to re-build a system from its tests than to re-build the tests from their system.
- You can measure code complexity, adherence to standards and test coverage; you can't measure quality of design.
- Formal and flexible are not mutually exclusive.
- The tests should pass, first time, every time (unless you're changing them or the code).
- Test code is production code and it deserves the same level of care.
- Prototypes should always be thrown away.
- Documentation is good, self documenting code is better, code that doesn't need documentation is best.
- If you're getting bogged down in the process then the process is wrong.
- Agility without structure is just hacking.
- Pair programming allows good practices to spread.
- Pair programming allows bad practices to spread.
- Team leaders should be inside the team, not outside it.
- Project Managers are there to facilitate the practice of developing software, not to control it.
- Your customers are not idiots; they always know their business far better than you ever will.
- A long list of referrals for a piece of software does not increase the chances of it being right for you, and shouldn't be considered when evaluating it.
- You can't solve a problem until you know what the problem is. You can't answer a question until the question's been asked.
- Software development is not complex by accident, it's complex by essence.
- Always is never right, and never is always wrong.
- Interesting is not the same as useful.
- Clever is not the same as right.
- The simplest thing that will work is not always the same as the easiest thing that will work.
- It's easier to make readable code correct than it is to make clever code readable.
- If you can't read your tests, then you can't read your documentation.
- There's no better specification document than the customer's voice.
- You can't make your brain bigger, so make your code simpler.
- Sometimes multiple exit points are OK. The same is not true of multiple entry points.
- Collective responsibility means that everyone involved is individually responsible for everything.
- Sometimes it's complex because it needs to be; but you should never be afraid to double check.
- If every time you step forward you get shot down you're fighting for the wrong army.
- If you're always learning you're never bored.
- There are no such things as "Best Practices". Every practice can be improved upon.
- Nothing is exempt from testing. Not even database upgrades or declarative tools.
- It's not enough to collect data, you need to analyse, understand and act upon that data once you have it.
- A long code freeze means a broken process.
- A test hasn't passed until it has failed.
- A test that can't fail isn't a test.
- If you give someone a job, you can't guarantee they'll do it well; if you give someone two jobs you can guarantee they'll do both badly.
- Every meeting should start with a statement on its purpose and context, even if everyone in the meeting already knows.
Promises and Lightning Components
So, what are promises?
In short, it’s a syntax that allows you to specify callbacks that should execute when a function either ’succeeds’ or ‘fails’ (is resolved, or rejected, in Promise terminology).
For many, they're a way of implementing callbacks in a way that makes a little more sense syntactically, but for others it's a new way of looking at how asynchronous code can be structured, one that reduces the dependencies between the pieces and provides you with some pretty clever mechanisms.
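A minimal example of the syntax, outside of Lightning altogether:
new Promise( ( resolve, reject ) => {
    // do something asynchronous, then signal the outcome
    resolve( 'it worked' );
})
.then( result => console.log( result )   // called if the Promise resolves
     , error => console.log( error ) );  // called if the Promise rejects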
However, this article isn’t about what promises are, but rather:
How can Promises be used in Lightning Components, and why you would want to?
If you want some in depth info on what they are, the best introduction I’ve found is this article on developers.google.com
In addition, Salesforce have provided some very limited documentation on how to use them in Lightning, here.
Whilst the documentation's existence can give us hope (Salesforce knows what Promises are and expects them to be used), the documentation itself is pretty slim and doesn't really go into any depth on when you would use them.
When to use Promises
Promises are the prime candidate for use when executing anything that is asynchronous, and there’s an argument to say that any asynchronous Javascript that you write should return a Promise.
For Lightning Components, the most common example is probably when calling Apex.
The standard pattern for Apex would be something along the lines of:
getData : function( component ) {
let action = component.get("c.getData");
action.setCallback(this, function(response) {
let state = response.getState();
if (state === "SUCCESS") {
let result = response.getReturnValue();
// do your success thing
}
else if (state === "INCOMPLETE") {
// do your incomplete thing
}
else if (state === "ERROR") {
// do your error thing
}
});
$A.enqueueAction(action);
}
In order to utilise Promises in a such a function you would:
- Ensure the function returned a Promise object
- Call 'resolve' or 'reject' based on whether the function was successful
getData : function( component ) {
return new Promise( $A.getCallback(
( resolve, reject ) => {
let action = component.get("c.getData");
action.setCallback(this, function(response) {
let state = response.getState();
if (state === "SUCCESS") {
let result = response.getReturnValue();
// do your success thing
resolve();
}
else if (state === "INCOMPLETE") {
// do your incomplete thing
reject();
}
else if (state === "ERROR") {
// do your error thing
reject();
}
});
$A.enqueueAction(action);
});
}
You would then call the helper method in the same way as usual:
doInit : function( component, event, helper ) {
helper.getData( component );
}
So, what are we doing here?
We have updated the helper function so that it now returns a Promise that is constructed with a new function that has two parameters 'resolve' and 'reject'. When the function is called, the Promise is returned and the function that we passed in is immediately executed.
When our function reaches its notional 'success' state (inside the 'state == "SUCCESS" section), we call the 'resolve' function that is passed in.
Similarly, when we get to an error condition, we call 'reject'.
In this simple case, you'll find it hard to see where 'resolve' and 'reject' are defined - because they're not defined by us. The Promise passes its own functions in, and since we haven't attached any behaviour to the Promise, it essentially operates as if it wasn't there at all. The functionality hasn't changed.
So the obvious question is.. Why?
What does a Promise give you in such a situation?
Well, if all you are doing is calling a single function that has no dependent children, then nothing. But let's say that you wanted to call "getConfiguration", which called some Apex, and then *only once that was complete* you called "getData".
Without Promises, you'd have 2 obvious solutions:
- Call "getData" from the 'Success' path of "getConfiguration".
- Pass "getData" in as a callback on "getConfiguration" and call the callback in the 'Success' path of "getConfiguration"
That is - in the first we introduce an explicit dependancy between getConfiguration and getData. Ideally, this would not be expressed in getConfiguration, but rather in the doInit (or a helper function called by doInit). It is *that* function which decides that the dependancy is important.
The second solution *looks* much better (and is), but it's still not quite right. We now have an extra parameter on getConfiguration for the callback. We *should* also have another callback for the failure path - otherwise we are expressing that only success has a further dependancy, which is a partial leaking of knowledge.
Fulfilling your Promise - resolve and reject
When we introduce Promises, we introduce the notion of 'then'. That is, when we 'call' the Promise, we are able to state that something should happen on 'resolve' (success) or 'reject' (failure), and we do it from *outside* the called function.
Or, to put it another way, 'then' allows us to define the functions 'resolve' and 'reject' that will get passed into our Promise's function when it is constructed.
E.g.
We can pass a single function into 'then', and this will be the 'resolve' function that gets called on success.
doInit : function( component, event, helper ) {
helper.getConfiguration( component )
.then( () => { helper.getData( component ) } );
}
Or, if we wanted a failure path that resulted in us calling 'helper.setError', we would pass a second function, which will become the 'reject' function.
doInit : function( component, event, helper ) {
helper.getConfiguration( component )
.then( () => { helper.getData( component ) }
, () => { helper.setError( component ) } );
}
Now, this solution isn't vastly different to passing the two functions directly into the helper function. E.g. like this:
doInit : function( component, event, helper ) {
helper.getConfiguration( component
, () => { helper.getData( component ) }
, () => { helper.setError( component ) } );
}
And whilst I might say that I personally don't like the act of passing in the two callbacks directly into the function, personal dislike is probably not a good enough reason to use a new language feature in a business critical system.
So is there a better reason for doing it?
Promising everything, or just something
Thankfully, Promises are more than just a mechanism for callbacks: they are a generic mechanism for *guaranteeing* that specified behaviour will occur once Promises 'settle' (become fulfilled or rejected).
When using a simple Promise, we are simply saying that the behaviour should be that the 'resolve' or 'reject' functions get called. But that's not the only option. For example, we also have:
- Promise.all - will 'resolve' only when *all* the passed-in Promises resolve, and will 'reject' if and when *any* of the Promises reject.
- Promise.race - will 'resolve' or 'reject' when the first Promise to respond comes back with a 'resolve' or 'reject'.
Once we add that to the mix, we can do something a little clever...
How about having the component load with a 'loading spinner' that is only switched off when all three calls to Apex respond with success:
doInit : function( component, event, helper ) {
Promise.all( [ helper.getDataOne( component )
, helper.getDataTwo( component )
, helper.getDataThree( component ) ] )
.then( () => { helper.setIsLoaded( component ) } );
}
Or even better - how about we call getConfiguration, then once that’s done we call each of the getData functions, and only when all three of those are finished do we set the flag:
doInit : function( component, event, helper ) {
helper.getConfiguration( component )
// 'then' needs a function - wrapping Promise.all in an arrow function
// defers the getData calls until the configuration has been retrieved
.then( () => Promise.all( [ helper.getDataOne( component )
, helper.getDataTwo( component )
, helper.getDataThree( component ) ] ) )
.then( () => { helper.setIsLoaded( component ) } );
}
Or how about - we normally call three functions to get data, unless a flag is set, at which point we want to call a fourth function, and only when all four are complete do we set the flag:
doInit : function( component, event, helper ) {
// hold the getData calls as functions, so that none of them fire
// until the configuration has been retrieved
let initialisations = [ () => helper.getDataOne( component )
, () => helper.getDataTwo( component )
, () => helper.getDataThree( component ) ];
if ( component.get( 'v.runGetDataFour' ) ) {
initialisations.push( () => helper.getDataFour( component ) );
}
helper.getConfiguration( component )
.then( () => Promise.all( initialisations.map( ( initialisation ) => initialisation() ) ) )
.then( () => { helper.setIsLoaded( component ) } );
}
Now, just for a second, think about how you would do that without Promises...
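As a footnote to that: because rejections propagate down a Promise chain, the combined example also gives us a single place to handle failure. A minimal sketch, reusing the setError helper from earlier:
doInit : function( component, event, helper ) {
helper.getConfiguration( component )
.then( () => Promise.all( [ helper.getDataOne( component )
, helper.getDataTwo( component )
, helper.getDataThree( component ) ] ) )
.then( () => { helper.setIsLoaded( component ) }
, () => { helper.setError( component ) } ); // a 'reject' from any of the calls above lands here
}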
Throw it away - Why you shouldn't keep your POC
"Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.
They are crucial for ensuring that facts are gathered before some particularly risky decisions are made. Technical or functional, they can address many different concerns and each one can be different, but they all have one thing in common. They serve to answer questions.
It can be tempting, whilst answering these questions to become attached to the code that you generate.
I would strongly argue that you should almost never keep the code that you build during a POC. Certainly not to put into a production system.
I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating and more efficient, and makes proofs of concept more effective by focussing minds on the questions that require answers.
Why do we set out on a proof of concept?
The purpose of a proof of concept is to (by definition):
* Prove: Demonstrate the truth or existence of something by evidence or argument.
* Concept: An idea, a plan or intention.
In most cases, the concept being proven is a technical one. For example:
* Will this language be suitable for building x?
* Can I embed x inside y and get them talking to each other?
* If I put product x on infrastructure y will it basically stand up?
They can also be functional, but the principles remain the same for both.
It's hard to imagine a proof of concept that cannot be phrased as one or more questions. In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.
The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking. If you *do* already know the answers, then the POC is of no value to you.
By extension, there is the implication that the questions posed require to be answered as soon as possible in order to support a decision. If that's not the case then, again, the POC is probably not of value to you.
As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.
This is quite different to what we set out to do in our normal software development process.
We normally know the answer to the main question we're asking (How do we functionally provide a solution to this problem / take advantage of this opportunity), and most of the time is spent focussed on building something that is solid, performs well and generally good enough to live in a production environment - in essence, not answering the question, but producing software.
What process do we follow when embarking on a proof of concept?
Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.
With the main question in mind, you often follow an almost scientific process. You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or deny that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.
You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up. It is an entirely exploratory process.
Often, you will find that you spend days following avenues that don't lead anywhere, backtrack and reassess, following a zig-zag path through a minefield of wrong answers until you reach the end point. In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.
But that's OK. The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.
To illustrate:
Will this language be suitable for building x?
You may need to check that you can build the right type of user interfaces, that APIs can be created, and that there are ways of organising code that make sense for the long-term maintenance of the system.
You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.
That said, if you were building a production system in the language you wouldn't dream of having an incomplete UI, or an API that doesn't handle errors completely, or just knocking stuff together in an ad-hoc manner.
Can I embed x inside y and get them talking to each other?
You will probably need to define a communication method and prove that it basically works. Get something up and running that is at least reasonably functional in the "through the middle" test case.
You probably don't need to develop an architecture that is clean with separation of concerns that means the systems are properly independent and backwards compatible with existing integrations. Or that all interactions are properly captured and that exceptional circumstances are dealt with correctly.
That said, if you were building a production system, you'd need to ensure that you define the full layered architecture, understand the implications of lost messages, prove the level of chat that will occur between the systems. On top of that you need to know that you don't impact pre-existing behaviour or APIs.
If I put product x on infrastructure y will it basically stand up?
You probably need to just get the software on there and run your automated tests. Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.
You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.
That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.
Production development and Proof of Concept development are not the same
The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.
You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.
When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.
That is, you're not usually trying to ensure that this piece of code works in all use-cases, but rather that this concept works in the general case with a degree of certainty that you can *extend* it to all cases.
Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.
That is intellectual currency, not software. The important deliverable of a production build is the software that is built.
That is the fundamental difference, and why you should throw your code away.
The opportunity cost of delaying software releases
Some software has been built. It generates revenue (or reduces cost) associated with sales, but the effect is not immediate. It could be the implementation of a process change that takes a little time to bed in, or the release of a new optional extra that not everyone will want immediately.
It is expected that when it is initially released there’ll be a small effect. Over the next 6 months there will be an accelerating uptake until it reaches saturation point and levels off.
Nothing particularly unusual about that plan. It probably describes a lot of small scale software projects.
Now let’s put some numbers against that.
At saturation point it’s expected to generate / save an amount equal to 2% of the total revenue of the business. It might be an ambitious number, but it’s not unrealistic.
The business initially generates £250k a month, and experiences steady growth of around 10% a year.
What does the revenue generation of that software look like over the first 12 months?
It’s pretty easy to calculate, plugging in some percentages that reflect the uptake curve:
Period   Original Business Revenue   Software Revenue Generation   Additional Revenue
1        £250,000.00                 0.2%                          £500.00
2        £252,500.00                 0.5%                          £1,262.50
3        £255,025.00                 1.1%                          £2,805.28
4        £257,575.25                 1.6%                          £4,121.20
5        £260,151.00                 1.9%                          £4,942.87
6        £262,752.51                 2.0%                          £5,255.05
7        £265,380.04                 2.0%                          £5,307.60
8        £268,033.84                 2.0%                          £5,360.68
9        £270,714.18                 2.0%                          £5,414.28
10       £273,421.32                 2.0%                          £5,468.43
11       £276,155.53                 2.0%                          £5,523.11
12       £278,917.09                 2.0%                          £5,578.34
                                     Total:                        £51,539.34
Or, shown on a graph:
So, here’s a question:
What is the opportunity cost of delaying the release by 2 months?
The initial thought might be that the effect isn’t that significant, as the software doesn’t generate a huge amount of cash in the first couple of months.
Modelling it, we end up with this:
Period   Original Business Revenue   Software Revenue Generation   Additional Revenue
1        £250,000.00                 -                             £-
2        £252,500.00                 -                             £-
3        £255,025.00                 0.2%                          £510.05
4        £257,575.25                 0.5%                          £1,287.88
5        £260,151.00                 1.1%                          £2,861.66
6        £262,752.51                 1.6%                          £4,204.04
7        £265,380.04                 1.9%                          £5,042.22
8        £268,033.84                 2.0%                          £5,360.68
9        £270,714.18                 2.0%                          £5,414.28
10       £273,421.32                 2.0%                          £5,468.43
11       £276,155.53                 2.0%                          £5,523.11
12       £278,917.09                 2.0%                          £5,578.34
                                     Total:                        £41,250.69
Let’s show that on a comparative graph, showing monthly generated revenue:
Or, even more illustrative, the total generated revenue:
By releasing 2 months later, we do not lose the first 2 months revenue – we lose the revenue roughly equivalent to P5 and P6.
Why?
When we release in P3, we don’t immediately get the P3 revenue we would have got. Instead we get something roughly equivalent to P1 (it’s slightly higher because the business generates a little more revenue overall in P3 than it did in P1).
This trend continues in P3 through to P8, where the late release finally reaches saturation point (2 periods later than the early release – of course).
Throughout the whole of P1 to P7 the late release has an opportunity cost associated. That opportunity cost is never recovered later in the software’s lifespan as the revenue / cost we could have generated the effect from is gone.
If the business was not growing, this would amount to a total equal to the last 2 periods of the year.
In our specific example, the total cost of delaying the release for 2 months amounts to 20% of the original expected revenue generation for the software project in the first year.
And this opportunity cost is solely related to the way in which the revenue will be generated; the rate at which the uptake comes in over the first 6 months.
Or to put it another way – in this example, if you were to increase or decrease the revenue of the business or the percentage generation at which you reach saturation point the cost will always be 20%.
So, when you’re thinking of delaying the release of software it’s probably worth taking a look, modelling your expected uptake and revenue generation to calculate just how much that will cost you…
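If you want to run the same numbers for your own project, here's a minimal sketch of the model behind the tables above - the starting revenue, the 1%-a-month growth and the uptake curve are the assumptions from this example:
// Sketch of the revenue model above: £250k starting monthly revenue, 1% monthly
// growth (as used in the tables), and an uptake curve that saturates at 2%.
const startingRevenue = 250000;
const monthlyGrowth   = 0.01;
const uptakeCurve     = [ 0.002, 0.005, 0.011, 0.016, 0.019, 0.02 ]; // flat at 2% thereafter

function additionalRevenue( delayInMonths ) {
    let total = 0;
    for ( let period = 1; period <= 12; period++ ) {
        const businessRevenue = startingRevenue * Math.pow( 1 + monthlyGrowth, period - 1 );
        const monthsLive      = period - delayInMonths; // how long the software has been released
        if ( monthsLive >= 1 ) {
            const uptake = uptakeCurve[ Math.min( monthsLive, uptakeCurve.length ) - 1 ];
            total += businessRevenue * uptake;
        }
    }
    return total;
}

const onTime  = additionalRevenue( 0 ); // ~£51,539
const delayed = additionalRevenue( 2 ); // ~£41,251
console.log( 'Opportunity cost of a 2 month delay: £' + ( onTime - delayed ).toFixed( 2 ) ); // ~£10,289 - about 20%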
How do I type e acute (é) on Windows 8
I keep on forgetting how to type é on Windows 8 (I used to use CTRL+ALT+e, but that's now often reserved for the Euro symbol)
I then tend to run a search on Google, and end up being pointed towards 8 year old answers that point you to Character Map, options in old versions of Word, or the old way of typing the extended ASCII character code.
They all suck.
And then I remember - it's easy.
You start by pressing CTRL + a key that represents the accent, then type the letter you want accented.
For example, CTRL + ' followed by e gives you é.
Brilliant!
The great thing about using this technique is that the characters you use (dead letters) are representative of the accents you want to type. This makes them much easier to remember than the seemingly random character codes.
Here are the ones I know about:
Keystrokes                    Accent type           Examples
CTRL + '                      acute                 é
CTRL + `                      grave                 è
CTRL + SHIFT + 6 / CTRL + ^   circumflex            ê
CTRL + ,                      cedilla               ç
CTRL + ~                      perispomene           õ
CTRL + SHIFT + 7 / CTRL + &   diphthongs / others   a = æ, o = œ, s = ß
It doesn't quite work with every app (Blogger on Chrome, for example), but it certainly covers Office 2013, including both Outlook and Word.
Gamification to level 80
Habits
- If you complete entries on them you get experience points and coins.
- If you fail to do them you lose hit points.
- Phone mother (positive)
- Get a takeaway (negative)
- Empty the bins (both - positive if you do it, negative if your partner does it)
Dailies
Suffering from a bit of a misnomer, dailies are repetitive tasks with some form of weekly schedule. Things that you want to do regularly, and on particular days. You can set a task to be required every day, only every Tuesday, or anything in between.
Whilst un-actioned habits are benign, if you don't tick off a daily then you get hurt. With habits you're gently encouraged to complete them as often as possible. Dailies come with a big stick.
- Go to the gym
- Do an uninterrupted hour of productive work
Todos
The classic task. The one-off thing that you've got to do, and once it's done you can cross it off and move on to the next thing.
Examples might be:
- Write a blog post about HabitRPG
- Book a holiday cottage in Wales
Other bits
Conclusion
As I've got further through I have found that I am more inclined to set longer target todos. They hurt you less as you have armour, and the longer you leave them the more XP you get. It sounds like cheating, but it's not. It's simply that I've matured the way in which I use my task manager.
It differs from sites like Strava, in that this is not primarily a competitive site - it needs to focus on the individual as it is trivially easy to "cheat". You can add arbitrary tasks and complete them immediately - though that really defeats the purpose. It relies on you entering a contract with yourself to use the site productively. For that reason, any fundamental introduction of competitiveness to the site is flawed.
However, there is the concept of "challenges" - the idea that you can set goals, assign a prize and then invite people to compete. It works, but only on the principle that people entering the challenges can be trusted.
BBC and web accelerators don't mix
- The BBC thinks I'm not in the UK when I am.
- iPlayer thinks I'm not in the UK.
- iPlayer won't serve me any content.
- BBC has adverts on it.
Having searched for these and many other variations on the theme, I decided to go back to the BBC site and give it one last go.
- Settings
- Advanced
- Bandwidth management
- Reduce data usage
- Then, in the top right corner, flick the switch to off.
That simple.
Redundancies should come with a pay rise
As far as I can see, there is only one reason why a company should ever make redundancies.
Due to some unforeseen circumstances the business has become larger than the market conditions can support and it needs to shrink in order to bring it back in line.
Every other reason is simply a minor variation or a consequence of that underlying reason.
Therefore, if the motivation is clear, and the matter dealt with successfully, then once the redundancies are over the business should be "right sized" (we've all heard that term before), and it should be able to carry on operating with the same values, practices and approach that it did prior to the redundancies.
If the business can't, then I would suggest that it is not the right size for the market conditions and therefore the job isn't complete.
OK, there may be some caveats to that, but to my mind this reasoning is sound.
In detail:
When you reduce the headcount of the business you look for the essential positions in the company, keep those, and get rid of the rest.
Once the redundancies are finished you should be left with only the positions you need to keep in order to operate successfully.
It's tempting to think that you should have a recruitment freeze and not back-fill positions when people leave, but if someone leaves and you don't need to replace them, then that means you didn't need that position, in which case you should have made it redundant.
Not back-filling positions is effectively the same as allowing your employees to choose who goes based on their personal motives, rather than forcing the business heads to choose based on business motives. This doesn't make business sense.
So, you need to be decisive and cut as far as you can go without limiting your ability to operate within the current market conditions.
To add to that, recruitment is expensive. If you're in a highly skilled market then you'll likely use an agency. They can easily charge 20% of a salary for a perm head. On top of that you have the cost of bringing someone up to speed, at a time when you're running at the minimum size your market will allow. Plus there's the cost of inefficiency during the onboarding period as well as the increased chance of the remaining overstretched employees leaving as well.
The upshot is that you really can't afford to have people leave, it's so expensive that it jeopardises the extremely hard work you did when you made the redundancies.
There's a theory I often hear that you can't have contractors working when the perm heads are being marched out. That's a perfectly valid argument if the perm head would be of long term value to you and can do the job that the contract head can do. But if you need the contractor to do a job that only lasts another 3 months and that person is by far the best or only person you have for the job, then the argument just doesn't stand up. Get rid of the perm position now and use the contractor, it'll be cheaper and more beneficial to the business in the long run.
OK, that's maybe not the most sentimental of arguments, but why would you worry about hurting the feelings of people who no longer work for you, at the expense of those that still do?
It may even be worse than that - you could be jeopardising the jobs of others that remain by not operating in the most efficient and effective way possible.
Another prime example is maternity cover. If you need the person on maternity to come back to work then you almost certainly need the person covering them. If it's early in the maternity leave then you'll have a long period with limited staff, if it's late in the leave then you only need the temporary cover for a short period more. Either way you're overstretching the perm staff left to cover them and risking having them leave.
Finally, there's the motivation to ensure that the business that remains is running as lean as possible - that costs are as low as they could be. The temptation is to cut the training and entertainments budget to minimum and pull back on the benefits package.
As soon as you do this you fundamentally change the character of the business. If you always prided yourself on being at the forefront of training then you attracted and kept staff who valued that. If you always had an open tab on a Friday night at the local bar, then you attracted people who valued that. Whatever it is that you are cutting back on, you are saying to people who valued it that "we no longer want to be as attractive to you as we once were; we do not value you quite as much as we did". This might not be your intention, but it is the message your staff will hear.
I put it to you that the cheapest way to reduce costs after redundancies is to be completely honest to the staff you keep. Say it was difficult, say that you're running at minimum and that a lot will be expected of whoever's left. But tell them that they're still here because they're the best of the company and they are vital to the company's success. Let them know that the contractors you've kept are there because they're the best people for those positions to ensure that the company succeeds. Tell them that the contractors will be gone the moment they're not generating value or when a perm head would be more appropriate. Make it clear that the company is now at the right size and the last thing you want is for people to leave, because you value them and that if they left it would damage your ability to do business.
Then give them a pay rise and a party to prove it.
Agile and UX can mix
- A coherent view and design that bind the disparate elements together into a complete system.
- Expert techniques and knowledge that allow you to discover the right problems to fix with greater accuracy.
- Design practices and investigative processes that allow you to test potential solutions earlier in the project (i.e. with less cost) than would otherwise be possible, helping ensure you do the right things at the right time.
- Extremely expressive communication tools that allow you to describe the system you're going to deliver as that understanding changes through the project.
Remote workforces and the 12 golden questions
First - if your team have no time to do anything other than the day's work then they have no opportunity to develop.
Second - it can be difficult to spot someone who thinks their team-mates are not committed to quality work.
Third - it can be difficult to ensure that everyone knows what quality work their team-mates are doing.
Measuring the time left
If, like me, you work in what might be termed a semi-agile*1 arena, then you also need some hard and fast numbers. What I am going to talk about is a method for working out the development time left on a project that I find to be pretty accurate. I'm sure that there are areas that can be finessed, but this is a simple calculation that we perform every few days that gives us a good idea of where we are.
The basis
It starts with certain assumptions:
You are using stories
OK, so they don't actually have to be called stories, but you need to have split the planned functionality into small chunks of manageable and reasonably like-sized work.
Having done that you need to have a practice of working on each chunk until it's finished before moving on to the next, and have a customer team test and accept or sign off that work soon after the developers have built it.
You need that so that you uncover your bugs, or unknown work, as early as possible, so you can account for them in your numbers.
Your customer team writes consistently sized stories
This is so you can use some simple rules for guessing the size of the work that your customer team has added but your developers have not yet estimated.
If you don't have this then you can't do any simple mathematics on the numbers you have and it'll make your life much harder.
You need this early in the project so that you have a reasonable amount of data to work with.
Your developers produce consistent estimates.
This tends to be the case if the same group of developers estimate all the stories and they all involve making changes to the same system. If a project involves multiple teams or systems then you may want to split them into sub projects for the means of this calculation.
You record the time spent by your developers
It doesn't need to be a detailed analysis of what time is spent doing what, but a simple total of how much time has been spent by the developers, split between the time spent on stories and that on fixing defects.
If you don't do this, even on the most agile of projects, then your bosses and customer team don't have the real data that they need to make the right decisions.
You, and they, are walking a fine line to negligence.
The calculation
The calculation is simple, and based on the following premises:
- If your previous estimates were out, they will continue to be out by the same amount for the whole of the project.
- The level of defects created by the developers and found by the customer team will remain constant through the whole project.
- Defects need to be accounted for in the time remaining.
- Un-estimated stories will be of a similar size to previously completed work.
Using these we can work out:
totalTimeSpent = The total time spent on all development work (including defects).
totalTimeSpentOnDefects = The total time spent by developers investigating and fixing defects.
numberOfStoriesCompleted = The count of the number of stories that the development team have completed and released to the customer.
storiesCompletedEstimate = The sum of the original estimates against the stories that have been completed and released to the customer.
totalEstimatedWork = The sum of the developers' estimates against stories and defects that are yet to be done.
numberOfUnEstimatedStories = The count of the number of stories that have been raised by the customer but not yet estimated by the development team.
numberOfUnEstimatedDefects = The count of the number of defects that have been found by the customer but not yet estimated by the development team.
Time remaining on work that has been estimated by the development team
For this we use a simple calculation based on the previous accuracy of the estimates. This includes taking into account the defects that will be found, and will need to be fixed, against the new functionality that will be built.
estimateAccuracy = totalTimeSpent / storiesCompletedEstimate
predictedTimeRemainingOnEstimatedWork = ( totalEstimatedWork * estimateAccuracy )
Time remaining on work that has not been estimated by the development team
In order to calculate this, we rely on the assumption that the customer team have got used to writing stories of about the same size every time. You may need to get a couple of developers to help with this by splitting things up with the customer team as they are creating them. I'd be wary of getting them to estimate work though.
averageStoryCost = totalTimeSpent / numberOfStoriesCompleted
predictedTimeRemainingOnUnEstimatedStories = numberOfUnEstimatedStories * averageStoryCost
averageDefectCost = totalTimeSpentOnDefects / numberOfStoriesCompleted
predictedTimeRemainingOnUnEstimatedDefects = numberOfUnEstimatedDefects * averageDefectCost
Total predicted time remaining
The remaining calculation is then simple: it's the sum of the above parts.
totalPredictedTimeRemaining = predictedTimeRemainingOnEstimatedWork + predictedTimeRemainingOnUnEstimatedStories + predictedTimeRemainingOnUnEstimatedDefects
We've assessed the accuracy of previous estimates, put in an allocation for bugs not yet found, and assigned a best-guess estimate against things the development team haven't yet put their own estimate on.
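To make the arithmetic concrete, here's a minimal sketch of the whole calculation in Javascript - the input numbers are purely illustrative:
// Illustrative inputs - in practice these come from your time tracking and story list.
const totalTimeSpent             = 400; // hours spent on all development work, including defects
const totalTimeSpentOnDefects    = 80;  // hours of that spent investigating and fixing defects
const storiesCompletedEstimate   = 320; // sum of the original estimates for the completed stories
const numberOfStoriesCompleted   = 40;
const totalEstimatedWork         = 200; // estimates against stories and defects yet to do
const numberOfUnEstimatedStories = 10;
const numberOfUnEstimatedDefects = 5;

// Estimated work, corrected for how accurate past estimates turned out to be.
const estimateAccuracy = totalTimeSpent / storiesCompletedEstimate; // 1.25 - estimates ran 25% under
const predictedTimeRemainingOnEstimatedWork = totalEstimatedWork * estimateAccuracy; // 250

// Un-estimated work, guessed from the average cost of what's already been completed.
const averageStoryCost  = totalTimeSpent / numberOfStoriesCompleted;          // 10 hours per story
const averageDefectCost = totalTimeSpentOnDefects / numberOfStoriesCompleted; // 2 hours per story
const predictedTimeRemainingOnUnEstimatedStories = numberOfUnEstimatedStories * averageStoryCost;  // 100
const predictedTimeRemainingOnUnEstimatedDefects = numberOfUnEstimatedDefects * averageDefectCost; // 10

const totalPredictedTimeRemaining = predictedTimeRemainingOnEstimatedWork
                                  + predictedTimeRemainingOnUnEstimatedStories
                                  + predictedTimeRemainingOnUnEstimatedDefects;
console.log( totalPredictedTimeRemaining + ' hours' ); // 360 hours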
The limitations
I find this calculation works well, as long as you understand its limitations.
Admittedly, for the first 20% or so of the project the numbers coming out of this will fluctuate quite a bit. This is because there isn't enough 'yesterday's weather' data to make the estimate-accuracy calculation meaningful. The odd unexpectedly easy (or hard) story can have a big effect on the numbers.
Also, if your testing and accepting of stories lags far behind your development, or if you don't fix your bugs first, you will underestimate the number of bugs in the system. However, if you know these things you can react to them as you go along.
I hope to present some data in this blog very soon, as we already have some empirical evidence that it works.
Further work
I am not particularly inclined to make changes to this calculation, as the assumptions and limitations are perfectly appropriate for the teams that I work with. For other teams this may not be the case, and I might suggest some slight alterations if you think they'd work for you:
Estimating the number of defects not yet found.
Pleasing line
$oFilter = Filter::attribute('player_id')->isEqualTo('1')->andAttribute('fixture_id')->isEqualTo('2');