Application Envisioning idea
Examples from three knowledge work domains:
(Illustrated above) An architect runs a tolerance-checking function in her building modeling application to check whether one section of a design meets a specific building code. When the automated function discovers a potential violation, it gives her the opportunity to ignore the finding based on her own interpretation of that code's description.
Knowledge workers may place a high value on how their computing tools automatically perform certain complex actions (E3, E4). But rather than experiencing these tools as yet more technology that “runs itself,” workers may want some measure of control over automations (A4, D2), especially when they can influence the character of entire tasks or larger activities (A5, C8, K2, K4).
A scientist likes that the latest version of her analysis application allows her to intervene in real time when she sees that automated algorithms are not producing desired outcomes. In the previous version of the same application, she could not interrupt lengthy analyses to make changes.
A financial trader turns off the automatic trading function in his trading application, which normally takes care of low-value, uncontroversial transactions. Accomplishing these deals manually, when he has time, gives him a better sense of his group's standard business.
To promote workers’ sense that they are at the locus of control, product teams can envision opportunities for users to appropriately contribute their own skills to the initiation, steering, and completion of automated processes (C4, G1).
Over time, workers may build confidence in how an application performs and contributes to their work outcomes (K13, L1), eventually becoming comfortable enough to surrender more complete control of some actions (D4, D7). Product teams can promote these desirable end states by concepting features that could allow workers to transition through such levels of confidence at their own pace.
When product teams do not actively consider how knowledge workers might retain an internal locus of control while using computing tools that powerfully shape their practices, users may find that resulting applications stressfully and inappropriately “make decisions” or “take actions” against their intentions. Workers may believe that they are being deskilled by these computing tools (E5, D3), which can influence their decisions about whether or not to fully adopt them into their own efforts (K).
Conversely, applications can introduce “too much” control, creating unnecessary opportunities for errors (C9, G3) and distracting users from larger goals (D1).
See also: A, C1, E, M1
Application Envisioning questions:
More specific questions for product teams to consider:
What automations are currently part of the work practices that your team is striving to mediate? What do targeted individuals think about their level of control over these technologies?
What problems currently occur due to workers feeling that they are being “controlled” or “reined in” by certain standardized artifacts and computing tools? Could these problems present opportunities for your team’s product?
What categorical classes of local needs in targeted organizations might influence workers’ perceptions of control and augmenting alignment?
What analogies and language might your team use to describe the relationship between user and product that you are striving to create? What implications could this described relationship have on brand?
How might you envision automation functionalities as actionable extensions of workers’ skills, rather than distant and self-operating replacements for them?
How could thinking about automation as just “another tool” in workers’ available repertoires allow your team to sketch more appropriate functionality concepts?
How might a lack of control over certain aspects of your product create deskilling barriers to its adoption and long-term success?
How might desired levels of control change over time as users increasingly trust your computing tool?
What settings and options might your team envision to give targeted individuals and organizations meaningful influence over automation functionalities in the context of their local ways of working?
What interactive scenarios and behaviors might provide users with a direct and engaging sense of control over your computing tool’s actions?
How much control might be too much control? What constraints could usefully promote reductions in effort, clarified interactive experiences, reduced likelihood of errors, and the confident creation of desired outputs?
What contexts could require automation to be highly standardized, rather than modifiable on a case-by-case basis at the discretion of individual workers?
Do you have enough information to usefully answer these and other envisioning questions? What additional research, problem space models, and design concepting could valuably inform your team’s application envisioning efforts?