IX. Locating and Using Ethnomethodology within Participatory Design

Ethnomethodology (EM) is the practice of observing and analysing ordinary social practices as they occur.1 The term was coined in the mid-1950s by the American sociologist Harold Garfinkel (1917–2011) and formally set out in his Studies in Ethnomethodology in 1967. Garfinkel had been studying jury members during legal trials and was struck by their ability to adopt professional methods and terminology during the course of their deliberations.2 In his opinion, these laypeople displayed ‘culture in action’, meaning that their impressive grasp of methods of understanding, reasoning, decision-making, etc. was exclusively part of the situated activity itself – the deliberation – not separate from it.3

EM can inform system design by giving us an insight into how people actually work, as opposed to how they say they work. In line with other proponents of EM, we deliberately refrained from building hypotheses and pre-defining patterns. Instead we allowed the structures and processes to emerge from the social practices.4 We decided first to interpret the observations quite broadly. By paying particular attention to the role played by technology in ‘getting the work done’, instructions based directly on the users’ work practices were conveyed to the developers. Those instructions resulted in the first iteration of the prototype. Following that, a narrower, perhaps more traditional, ethnomethodological approach was adopted to confirm that the prototype was in fact reflecting what the users wanted. This two-step process is less resource-intensive than one where detailed EM is required all the way through, but still very valuable.

1. The Sequencing of Search

The amalgamated videos described in section 2.3 were used for both the broad and narrow analyses. The aim of the former was to identify routinised, regular actions common to all members, which were then fed into the core system requirements. The latter formed part of our aspiration to find out more about the ‘sequencing of search’. Step-by-step recordings of each action with time stamps would enable us to scratch beneath the surface and reveal diversity in action. Identifying this diversity, alongside disruptions such as hesitations, would pinpoint where the resource was causing the individual to abandon their usual search sequence and instead adopt alternative practices.
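The kind of time-stamped action log shown in section 2 lends itself to simple mechanical pre-processing before qualitative analysis. The sketch below is not part of the project's actual tooling; the entry format, function names and five-second threshold are assumptions for illustration. It parses entries of the form ‘MM:SS – MM:SS: description’ and flags prolonged ‘No action’ spans as candidate hesitations for closer inspection.

```typescript
// Hypothetical helper for pre-screening timestamped observation logs.
interface LoggedAction {
  start: number;       // seconds from the start of the recording
  end: number;         // inclusive end second
  description: string; // the analyst's note for this span
}

// Parse "MM:SS – MM:SS: description" or the single-second "MM:SS: description".
function parseEntry(line: string): LoggedAction | null {
  const m = line.match(/^(\d{2}):(\d{2})(?:\s*[–-]\s*(\d{2}):(\d{2}))?:\s*(.*)$/);
  if (!m) return null;
  const start = Number(m[1]) * 60 + Number(m[2]);
  const end = m[3] ? Number(m[3]) * 60 + Number(m[4]) : start;
  return { start, end, description: m[5] };
}

// Flag "No action" spans at or above a duration threshold (assumed: 5 seconds)
// as possible hesitations worth reviewing against the video.
function flagHesitations(lines: string[], thresholdSeconds = 5): LoggedAction[] {
  return lines
    .map(parseEntry)
    .filter((a): a is LoggedAction => a !== null)
    .filter(a => a.description.startsWith('No action')
              && a.end - a.start + 1 >= thresholdSeconds);
}
```

Such a pass only shortlists candidate disruptions; deciding whether a pause reflects hesitation or engagement with the content still requires the analyst to return to the recording.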

We carried out analyses on eight excerpts from the amalgamated video/screen capture recordings. Samples from all members were included, and interactions with both familiar and unfamiliar resources were represented. We also analysed interactions with the final iteration of the prototype to gauge its ‘ease of use’ in comparison with the resources previously looked at.

Having an overview of the whole workstation made it possible to note the difference in use of peripherals such as computer mice and keyboards. For example, some users would use their mouse extensively, whereas others relied on the keyboard and its shortcuts. Our developers found this kind of information interesting because they had to consider elements of design that in other projects had been a given and based on their usual perception of what was most ‘user-friendly’. We were also able to record workflows and better understand whether the use of pen and paper, for example, was an integral part of an individual’s research practice, i.e. a conscious choice they made, or a workaround that had emerged out of necessity because the system was falling short.

2. Findings

We aim to publish the findings of our ethnomethodological analyses in the future, but it is nevertheless useful to include some of them here to illustrate the discussion above. The first example has been chosen because it emphasises in two ways how EM can feed into user-centred design.

2.1. MS, Interactional Analysis Workshop: Critique of Resources Used in Own Research (PC laptop, uses touch pad)

01:39 – 01:44: Moves cursor to browser scroll bar on the right and clicks down arrow to scroll down the page.

01:45 – 01:51: Stops scrolling and points at screen with a pen.

01:52: Puts pen down.

01:53 – 02:01: Resumes scrolling by using scroll bar down arrow.

02:02: Stops scrolling and closes browser auto-complete popup.

02:03 – 02:08: No action.

02:09 – 02:10: Moves cursor to up arrow on the browser scroll bar and starts scrolling.

02:11 – 02:15: No action.

02:16 – 02:18: Moves cursor to ‘add filter: Theses’ and clicks the link [refined search results load].

02:19 – 02:26: Moves cursor to browser scroll bar on the right and clicks down arrow to scroll down the page.

02:27 – 02:30: Moves cursor to up arrow on scroll bar and scrolls to top of page.

02:31 – 02:32: Moves cursor to browser back button and clicks it [search page loads].

02:33 – 02:40: Moves cursor to search field and clicks in it [unnecessary action because field is constantly available for typing, i.e. flashing cursor].

02:41 – 02:44: Types search term.

02:45 – 02:51: Moves cursor to refine option ‘Limit to’ and opens dropdown menu, selects ‘Theses’.

02:52 – 03:00: Moves cursor to option to search ‘within’ and opens dropdown menu, selects the ‘Title (omit ‘The’ & other leading articles)’ option.

03:01 – 03:03: Moves cursor to search button and clicks it [new page does not load because no search results are produced].

03:04 – 03:09: Moves cursor to search field and clicks in it.

03:10 – 03:12: Previous search term is highlighted, deletes this by hitting backspace on the keyboard.

03:13 – 03:18: Types new search term.

The first element, in the green text, is an example of how even electronic resources well known to us can disrupt the sequencing of search. In the above, MS was looking for information on a topic he had previously researched, so he knew that there was literature available. In this specific search, he was trying to find just theses on the topic, but despite using a filter called ‘Theses’ and limiting search results to ‘Theses’, he was not getting the results he wanted. In fact, at 03:01 he abandoned his current search strategy because no results whatsoever were produced. His final comment, as he gave up on the search, confirmed that many electronic resources have so far failed to attract a significant number of users because their benefits are not immediately obvious, and they are therefore unlikely to be incorporated into users’ workflows: “So, so far all I’ve been able to do electronically is mimic what I already knew to do with physically accessing the catalogues in the libraries so less effective than we’d wish it to be”.

The second element, in the red text, demonstrates how workflows can be impeded by the way users interact with the actual equipment, a mouse and/or a keyboard, for example. MS is the eldest of our participants, and PCs only became widely available at a late stage in his academic career. He himself admits that he sometimes struggles with ‘getting the technology to do what he wants’ but is keen to learn and not dismissive of it. From the actions highlighted it is obvious that the process of scrolling on a web page takes up a significant portion of his time, because the mouse has to be grabbed and the pointer moved to the browser’s scroll bar/up-down arrows before he can engage in the actual task of scrolling. This is a lot less cumbersome for our other participants, who have found more efficient ways of using the equipment: the scroll wheel on a mouse (KR and RB), the scroll function on a Mac touch pad (SE) or the keyboard arrow keys (JM). While developers cannot change the way users scroll, they can ensure that all these diversities of action are accommodated by enabling various modes of scrolling.
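Accommodating several scrolling styles is largely a matter of not privileging any single input mode. The fragment below is a sketch, not the project's actual code; the function name and pixel values are assumptions. It maps the keyboard keys our participants used onto scroll distances, leaving other input modes (scroll-bar arrows, mouse wheel, touch pad) to the browser's defaults.

```typescript
// Hypothetical mapping from keyboard keys to vertical scroll deltas.
// Returns null for non-scrolling keys so other handlers can process them.
// lineHeight and pageHeight are assumed values, not measured ones.
function keyScrollDelta(
  key: string,
  lineHeight = 20,
  pageHeight = 600
): number | null {
  switch (key) {
    case 'ArrowDown': return lineHeight;   // line-by-line, as JM scrolls
    case 'ArrowUp':   return -lineHeight;
    case 'PageDown':  return pageHeight;   // screenful at a time
    case 'PageUp':    return -pageHeight;
    default:          return null;         // not a scrolling key
  }
}
```

In a browser this would be wired to a `keydown` listener calling `window.scrollBy(0, delta)` when the result is non-null; the design point is simply that keyboard, wheel and scroll-bar users all reach the same outcome.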

Having a user-friendly interface that caters to a variety of practices can limit the number of disruptions and make first-time users of a resource more likely to return. This is shown in the example where SE, who is an advanced user, expects the technology to match his level of proficiency. If his workflow is to remain uninterrupted, the browser’s back button or tabbed browsing, for example, must be enabled and retain the function that has come to be universally accepted across platforms. In comparison with MS, SE’s actions to do with navigating the site generally last no more than 3–4 seconds, and several 1-second actions are observed (in the blue text). This is despite the fact that SE is browsing an unfamiliar site, whereas MS was consulting a resource he had used before.

2.2. SE, Design Group 1: Critique of Resources Suggested by the Project Team (Mac laptop, uses touch pad)

00:49 – 00:52: [On home page] hovers over various menu options without choosing any.

00:53: Clicks link to go to ‘Beta Search’.

00:54 – 01:02: Pop-up window appears and starts loading.

01:03 – 01:05: Clicks green button at top of window [Mac feature] – window size is slightly increased.

01:06: Enlarges window even further by dragging the right side outwards using the touchpad.

01:07 – 01:10: Enlarges window on the left side using the same method as above.

01:11: Clicks red button at top of window [Mac feature] to close it.

01:12 – 01:16: Clicks link to go to ‘Beta Search’ but uses touchpad and keyboard in combination to open new tab instead of popup window.

01:17 – 01:22: New tab loads.

01:23 – 01:36: Scrolls page using touchpad.

01:37 – 01:39: Types search term in search box [cursor constantly active in box so no need to click in it to type].

01:40: Hits enter to execute search.

01:41 – 01:45: Moves cursor to top of screen while search results load.

01:46 – 01:52: Hovers over various refine options on screen, chooses ‘Sort by oldest’ [search results are re-ordered].

01:53 – 02:01: Browses the first search result’s details.

02:02 – 02:03: Clicks ‘more’ option under that search result’s ‘Extracted terms’ section [more terms appear].

02:04 – 02:11: Resumes browsing the first search result.

02:12 – 02:21: Picks up a pen and makes a note.

02:22 – 02:29: Resumes browsing.

02:30 – 02:32: Selects ‘pdf’ option under ‘Available Formats’ for the first search result [popup window appears].

02:33 – 02:38: Hovers cursor over the different options in popup window.

02:39: Selects ‘Close window’ [popup window closes].

02:40 – 02:54: Selects ‘View’ option under ‘Available formats’ for the first search result, removes hand from laptop while new page loads.

02:55 – 03:00: [On search result page] scrolls down page using touchpad.

03:01: Scrolls back to top of page.

03:02: Goes to next page by clicking right arrow in grey panel on right side of image [new image loads].

03:03 – 03:07: Scrolls down page using touchpad.

03:08: Scrolls back to top of page.

Above is an example of how a user-friendly interface immediately allows an advanced user to ‘jump’ straight into the actual research. Yet, what happens when a less advanced user encounters a system that is not only user-friendly, but also a reflection of his personal research practice? This was tested in the final Design Group.

In the last example below, we return to MS and his testing of the second iteration of the prototype. At this stage, the design process had included several instances of the PD pattern of critique-discuss-create, and the project team were anxious to test whether the prototype was indeed what the participants had asked for. The first iteration had been tested during DG2, but the browse feature here tested by MS was not available then, so this is a genuine record of his first encounter and an indication of the ‘user-friendliness’ of our system.

2.3. MS, Design Group 3: Prototype Testing (PC laptop, uses touch pad)

06:54: Clicks ‘Search the collection’ [new page loads].

06:55 – 07:32: Moves cursor to browser scroll bar on the right and occasionally clicks down or up arrows to scroll the page.

07:33: Clicks the title of the first issue listed [content is expanded to show a list of available issues].

07:33 – 07:43: No action.

07:44: Clicks first issue listed [new page loads].

07:45 – 07:55: No action.

Stops to do an unrelated action, returns to testing at 08:25.

08:25 – 08:34: No action.

08:35: Scrolls down page using the down arrow on the keyboard.

08:36 – 08:50: No action.

08:50 – 09:06: Moves cursor back and forth across the page.

09:07 – 09:10: Zooms in on the image using the ‘+’ key on the keyboard.

09:11 – 09:19: No action.

09:19 – 09:21: Zooms out on the image using the ‘-’ key on the keyboard.

The first thing to notice is that the number of 1-second actions has increased. In the first example, two 1-second actions were noted, at 01:52 and 02:02 respectively. However, on closer inspection it is clear that those actions had nothing to do with the search process: he put his pen down and he closed a browser pop-up message. This suggests that the interface was not intuitive enough for a less advanced user to navigate at speed. In the first example, MS also tended to use the browser’s right-hand scroll feature, making browsing a page quite time-consuming because he had to move the cursor to the up or down arrow first. In this example, we see instances of the keyboard being used both to scroll (the arrow keys) and to zoom (the ‘+’ and ‘-’ keys). Whether this is a coincidence is uncertain, but it is interesting that this change in behaviour occurs after 07:44, when he has landed on a page where instructions on how to navigate are displayed directly beneath the image of the page, the element that most users’ eyes would be drawn to first. While the majority of sites will have this information somewhere, it is not always in a prominent position, which could imply that user instructions must be immediately visible if they are to be at all beneficial.

On a practical level and in terms of ease of use, our prototype has been very well received by the Design Group participants. To establish whether this benefits the research process in general, it has to be rolled out to more users. Yet, based on MS’s brief interaction above, our resource does seem easier to use for those who usually struggle with more complex interfaces. In fact, MS’s actions now look more similar to SE’s: the navigation sequences have been reduced, and the ‘no action’ sequences, which we interpret as engaging with the content rather than the interface, have been extended. Overall, this is a promising sign that a resource built following PD will be more sustainable in the long run, because it lets users concentrate on the research process and keeps disruptions of that process to a minimum.


Footnotes

  1. Ball, M. and Smith, G. (2011) ‘Ethnomethodology and the Visual: Practices of Looking, Visualization, and Embodied Action’, in Margolis, E. and Pauwels, L. (2011) The SAGE Handbook of Visual Research Methods, p.392. doi: 10.4135/9781446268278.
  2. Lynch, M. (1993) Scientific practice and ordinary action: Ethnomethodology and social studies of science, Cambridge: CUP, pp. 4-5.
  3. Francis, D. and Hester, S. (2004) An Invitation to Ethnomethodology: Language, Society and Social Interaction, London: SAGE Publications, p. 28. doi: 10.4135/9781849208567.
  4. Crabtree, A. et al. (2000) ‘Ethnomethodologically Informed Ethnography and Information System Design’, Journal of the American Society for Information Science, vol. 51, no. 7, pp. 666-82: p.671.