Why you should stop making Breadcrumbs
Breadcrumb navigation, known to some as cookie-crumb navigation or a navigational path to others, is a well-known staple of many websites: a path-like structure, most commonly displayed at the top of the current page, that shows the trail of pages you, the user, took to get to where you are now. And, when implemented properly, it is a very helpful feature. But for every good implementation there is a bad one, and the bad ones usually stem from an age-old mistake: trying to fix a problem by addressing the symptoms instead of solving the actual underlying problem. A problem that should not have been there in the first place. A problem that even the good implementations usually fail to address, which in turn has created a much larger one.
But before we go into the shortcomings of the good implementations, let's delve into the bad ones first. And let's start with one of the more egregious ones:
The non-functional or non-interactive breadcrumb.
We see them regularly in everyday life, most notably when a website forces you into an isolated flow, like the checkout flow of a web shop or an online application form. Isolating the flow is often a good practice, limiting distractions and the possibility of accidentally exiting the flow while focusing the user's attention on a structured data-entry path. But very often this flow is tainted by superfluous information in the shape of a static breadcrumb, e.g. the breadcrumb steps in a web shop's checkout flow that read "1. Personal info - 2. Delivery method - 3. Payment" or similar. This is a clear example of a non-functional breadcrumb created by committee, unable to see it for what it is.
When you flip your position and look at it from the perspective of the person checking out, it gives no useful information at all. It could just as well say "1/3 - 2/3 - 3/3" and it would be just as (or maybe slightly more) helpful. Leaving it out and adding "(1/3)" and "(2/3)" to the inevitable "Next" buttons that accompany each stepped page, i.e. "Next (1/3)", "Next (2/3)" and "Place order", would still give me the same information about how long the process is going to take, and as the person filling out the flow that is the only indicator I'm interested in. It neither interests nor helps me to be reminded of what I just entered previously or to be told what will be expected of me later; I know what steps I have to go through and only want to know how long it will take before I can click "Place" on my order.
When the breadcrumb is interactive and allows me to correct a mistake I remembered from step 1 while I'm on step 3, then the story changes and the pattern does have merit, but so often this is not the case. Which, as an experience, makes the existence of the breadcrumb even worse. Showing me a path but offering no way to follow the crumbs back to an earlier step or location on that path has always bothered me as a design pattern. What are you offering your end users in this pattern? An insight into the inner workings of your business logic layer? What benefit is it to me as an end user to have a description of a part of the site that you will not allow me to interact with or navigate to?
Granted, it is rare, but this non-interactive pattern can be seen in more prominent places as well, notably web shops that show a detailed breadcrumb path to the product you are viewing, neatly categorising and sub-categorising down to your product, but then don't allow you to click on one of the category crumbs to go directly to the listing of that part of the shop's offerings. Please stop doing that.
The misleading breadcrumb.
But there is a more troubling problem with breadcrumbs, one exacerbated by the fact that, from the developer's point of view, it is often seen as a minor problem or even a beneficial feature. This pattern is most prevalent on e-commerce websites where products fall within multiple categories, so that multiple category crumbs can exist on the same level and, in the worst-case scenario, all carry the same name and/or description. In these cases it is very easy to misprogram a site so that, after clicking on a product in a category listing and going one level down in the breadcrumb hierarchy, returning to the category page via the crumb brings you to a different category listing that, again in the worst case, no longer contains the product you clicked on in the first place.
This pattern is often dismissed as minor and is sometimes defended with the flawed reasoning that "it is better for the user to have a wrong page to return to than no page at all". In reality, the converse is true: unpredictable behaviour in your category listings will quickly erode trust in your website, driving users to find alternative ways to navigate it or, in extreme cases, to use Google to find specific products and click straight through to a product detail page, ignoring every link on your site. And this happens more than you realise.
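The root cause is that the server derives the breadcrumb from the product alone, when it should derive it from the category the user actually navigated through. One way to fix this is to carry that category along in the product URL and only fall back to a default when no navigation context exists. A minimal sketch of this idea, with an entirely hypothetical data model and category slugs, could look like this:

```python
# Hypothetical category tree: slug -> breadcrumb trail of display labels.
CATEGORIES = {
    "cables-hdmi": ["Home", "Cables", "HDMI"],
    "tv-accessories": ["Home", "TV", "Accessories"],
}

# An example product listed under two categories at once.
PRODUCT = {
    "id": 2622628700025,
    "name": "HDMI cable",
    "category_ids": ["cables-hdmi", "tv-accessories"],
}


def breadcrumb_for(product, via_category=None):
    """Return the breadcrumb trail for a product detail page.

    `via_category` is the category slug taken from the URL the user
    clicked through (e.g. /tv-accessories/product/2622628700025), so
    the crumb leads back to the exact listing the user came from.
    """
    if via_category in product["category_ids"]:
        trail = CATEGORIES[via_category]
    else:
        # No navigation context (direct link, search engine result):
        # fall back to the product's first category, but consistently,
        # so the breadcrumb never jumps between equally named listings.
        trail = CATEGORIES[product["category_ids"][0]]
    return trail + [product["name"]]
```

Used like this, `breadcrumb_for(PRODUCT, "tv-accessories")` yields `["Home", "TV", "Accessories", "HDMI cable"]`, while a contextless visit consistently yields the "Cables" trail: the crumb always points back to a page that actually contains the product.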
So what about correctly working breadcrumbs?
Technically correct breadcrumbs are okay then, right? Well, there is still a problem, but it lies not so much with the breadcrumb itself as with a side effect of modern implementations of it.
To explain it we will have to take a trip down memory lane…
… to a time some thirty odd years ago, back to the beginning of the 90’s and the birth of the World Wide Web.
Upon first release, websites were made up entirely of static content; that is to say, a page, once loaded and rendered in a web browser, would not have any kind of dynamic elements. The first HTTP standard only supported the GET method, making it a read-only medium. Early web servers could be considered static in a similar vein, in that each page described by a URL related directly to a file on the web server's file system. A URL like http://www.example.com/index.htm would correspond to an index.htm file in the root directory of the designated web server, and a URL like http://www.example.com/help/q-and-a/index.htm to an index.htm file in the help/q-and-a directory under that root. In this early time we had an inherent breadcrumb, clear and hierarchically correct at all times, in the form of the URL itself. In addition to being easy to understand, a well-organised website would even allow adventurous visitors to guess page names to find their own way or to fix mistakes by logical deduction. It engaged people in a way that made them curious and adventurous.
Though the static server was suitable for the average informative website in the early epoch of the World Wide Web, the need to accommodate large amounts of data that would not lend itself to a straightforward tree-like organisational structure triggered the introduction of servers capable of dynamically creating content. This dynamic creation of content should not be confused with dynamic content: the former addresses the way content is gathered on the server side before being sent to the browser, while the latter concerns itself with rendering dynamic elements once displayed in the browser. The introduction of the ability to trigger programs on the server via CGI started the first departures from straightforward URLs. URLs like http://www.example.com/help/installation-errors/error.cgi?id=80005 would replace the former HTML file name part of the URL that used to describe the topic. The departure from purely static URLs to early dynamic ones moved some of the descriptive part of the URL into the content of the page served, but on a well-organised website the URL would still be clear enough to inherently function as a breadcrumb. And, though there was no logical word play involved any more, in some respects going on an adventure and guessing other pages was made even easier.
Around the same time as the introduction of the dynamic server, the ability to POST arrived, changing the unidirectional nature of the website into a bi-directional system. Among the early adopters were web forums, and this is also where we see the first signs of the disconnect from predictable URL patterns. URLs for these kinds of applications started taking the form http://www.example.com/topic.cgi?id=417, where the file topic.cgi would not be returned as content but would instead execute applications that could potentially create any kind of content, with no way for the visitor of the website to know, nor indeed any need to know, what the topic.cgi file actually did to create the content served. But while this didn't affect the end user in consuming the content, it robbed them of a way to determine their location on the site. The URL would give no indication of whether the user was in the "general" section or the "help, subsection Q&A" section of the forum visited.
Fast forward a few more years, and with the advent of fully programmable web servers with new capabilities such as server-side session keeping, access management, etc., URL patterns become more and more complicated, changing into almost unrecognisable and certainly unreadable long technical sentences like http://www.example.com/home.php?sid=6f552469adebd82cc71ce2b9b9cf16f6&ref=Zmlyc3RuYW1lLmxhc3RuYW1lQGV4YW1wbGUubG9uZy5lbWFpbC5jb20%3D&upc=2622628700025. Since then, convoluted and complicated URLs that favour technical necessity over readability have increasingly taken over URL design, leaving users lacking the functionality of finding one's place on a website. This moment more or less coincides with the broad adoption of the breadcrumb pattern as a standard component of navigation-heavy websites.
So, we had a problem and as an industry we solved it, right? Well, we sidestepped a problem we created, by adding a feature to negate one of its symptoms, and by doing so we inadvertently created, in part, a larger problem: the waning knowledge of, and confidence in, the use of direct URLs among the general public.
When we fast forward to today, we see a lot of malware thriving on these near-incomprehensible URLs, where bad actors are able to insert phishing URLs unnoticed. In the early days a phishing attempt like http://www.bank.com%2Fmalware%2Eru would raise red flags immediately; it stood out to the point of meriting investigation, even by the general public. But in the complicated world of overly dynamic URL patterns, http://www.bank.com%2FMaLWaRe%2ErU?login=Zmlyc3RuYW1lLmxhc3RuYW1lQGV4YW1wbGUubG9uZy5lbWFpbC5jb20%3D&upc=2622628700025 blends into the background and is no longer questioned by the public at large: they have become disinterested to the point of being afraid of doing something wrong in the address bar.
By attacking the core problem, the problem we created in the first place, we can reverse the ever-growing divide between the average user's and the web developer's understanding of the web, re-engage the general public with the parts of the browser they were designed to engage with and, hopefully, in time, restore public confidence to a level where people once again feel included in what has become an indispensable part of everyone's everyday life.
And fast-forwarded to today, there is no need for this problem to persist any more. Further improvements in web server technology have made generating and using descriptive, dynamically created URLs as easy as naming and displaying each crumb in the breadcrumb path. If you can create the visual breadcrumb "Home > Cables > Display > HDMI to DP > Budget 3m micro-HDMI to DP" to be rendered on the page, then how hard is it to change the URL to http://www.example.com/cables/display/hdmi-to-dp?productid=2622628700025?
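Not hard at all, is the point. As a small sketch (function names are hypothetical), building such a URL path from the very labels a breadcrumb component already renders takes little more than a slugifying helper:

```python
import re


def slugify(label: str) -> str:
    """Turn a display label into a lowercase, URL-safe path segment."""
    return re.sub(r"[^a-z0-9]+", "-", label.lower()).strip("-")


def url_for(crumbs: list[str], product_id: int) -> str:
    """Build a readable URL path from the same labels the breadcrumb
    component renders, dropping the "Home" root crumb."""
    path = "/".join(slugify(crumb) for crumb in crumbs[1:])
    return f"/{path}?productid={product_id}"


# The category part of the breadcrumb from the example above:
crumbs = ["Home", "Cables", "Display", "HDMI to DP"]
print(url_for(crumbs, 2622628700025))
# → /cables/display/hdmi-to-dp?productid=2622628700025
```

The same slugs can then feed the server-side routing, so the readable URL and the visual breadcrumb stay in sync by construction rather than by convention.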