All posts by Michael Daul

Bringing 500 Years of Women’s Work Online

Back in 2015, Lisa Unger Baskin placed her extensive collection of more than 11,000 books, manuscripts, photographs, and artifacts in the Sallie Bingham Center for Women’s History & Culture in the David M. Rubenstein Rare Book & Manuscript Library. In late February of 2019, the Libraries opened the exhibit “Five Hundred Years of Women’s Work: The Lisa Unger Baskin Collection,” presenting visitors with a first look at the diversity and depth of the collection, revealing the lives of women both famous and forgotten, and recognizing their accomplishments. I was fortunate to work on the online component of the exhibit, in which we aimed to offer an alternate way to interact with the materials.

Homepage of the exhibit

Most of the online exhibits I have worked on have not had the benefit of a long planning timeframe, which usually means we have to be somewhat conservative in our vision for the end product. However, with this high-profile exhibit, we did have the luxury of a (relatively) generous schedule and as such we were able to put a lot more thought and care into the planning phase. The goal was to present a wide range and number of items in an intuitive and user-friendly manner. We settled on the idea of arranging items by time period (items in the collection span seven centuries!) and highlighting the creators of those items.

We also decided to use Omeka (classic!) for our content management system as we’ve done with most of our other online exhibits. Usually exhibit curators manually enter the item information for their exhibits, which can get somewhat tedious. In this case, we were dealing with more than 250 items, which seemed like a lot of work to enter one at a time. I was familiar with the CSV Import plugin for Omeka, which allows for batch uploading items and mapping metadata fields. It seemed like the perfect solution to our situation. My favorite feature of the plugin is that it also allows for quickly undoing an ingest in case you discover that you’ve made a mistake with mapping fields or the like, which made me less nervous about applying batch uploads to our production Omeka instance that already contained about 1,100 items.

Metadata used for batch upload

Working with the curators, we came up with a data model that would nest well within Omeka’s default Dublin Core–based approach and expanded it with a few extra non-standard fields attached to a new custom item type. We then assembled a small sample set of data in spreadsheet form, and I worked on spinning up a local instance of Omeka to test and make sure our approach was actually going to work! After some frustrating moments with MAMP and tracking down strange paths to things like ImageMagick (thank you eternally, Stack Overflow!), I was able to get things running well and was convinced that batch uploads via spreadsheet were a good approach.
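To give a sense of how the ingest worked, CSV Import simply maps spreadsheet columns onto metadata fields at upload time. Here’s a simplified sketch of the spreadsheet structure (the column names and values are illustrative placeholders, not the exhibit’s actual data):

Title,Creator,Date,Description,Time Period,File
"Sample item title","Sample Creator","1850","A short description of the item","1800s","item-image-01.jpg"
"Another item title","Another Creator","1921","Another short description","1900s","item-image-02.jpg"

The Dublin Core columns (Title, Creator, Date, Description) map onto Omeka’s standard fields, while extra columns like Time Period map onto the custom item type’s non-standard fields.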

Now that we had a process in place, I began work on a custom theme to use with the exhibit. I’d previously used Omeka Foundation (a grid-based starter theme using the Zurb Foundation CSS framework) and thought it would be a good starting point for this project. The curators had a good idea of the site structure they wanted, so I jumped right in on creating some high-fidelity mockups, borrowing look-and-feel cues from the beautiful print catalog that was produced for the exhibit. After a few iterations we arrived at a place where everyone was happy, and I started to work on functionality. I also incorporated a more recent version of the Foundation framework, as the version in the starter theme was out of date.

Print catalog for the exhibit

The core feature of the site would be the ability to browse all of the items we wanted to feature via the Explore menu, which we broke into seven sections — primarily by time period, but also by context. After looking at some other online exhibit examples that I thought were successful, we decided to use a masonry layout approach (popularized by sites like Pinterest) to display the items. Foundation includes a great masonry plugin that was very easy to implement. Another functionality issue had to do with displaying multi-page items. Out of the box, I think Omeka doesn’t do a great job displaying items that contain multiple images. I’ve found combining them into PDFs works much better, so that’s what we did in this case. I also installed the PDF Embed plugin (based on the PDF.js engine) in order to get a consistent experience across browsers and platforms.
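As for the masonry layout itself, the initialization only takes a few lines. Here’s a minimal sketch using the standalone Masonry library along with imagesLoaded (the selectors are illustrative, not the exhibit’s actual markup):

// initialize the masonry layout once thumbnails have loaded,
// so that column heights are measured correctly
var grid = document.querySelector('.explore-grid');

imagesLoaded(grid, function () {
  new Masonry(grid, {
    itemSelector: '.item-card',  // each exhibit item tile
    columnWidth: '.grid-sizer',  // an empty element that sets column width
    percentPosition: true,       // use % widths so the grid stays responsive
    gutter: 16                   // horizontal space between tiles, in px
  });
});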

Once we got the theme to a point where everyone was happy with it, I batch imported all of the content and proceeded with a great deal of cross-platform testing to make sure things were working as expected. We also spent some time refining the display of metadata fields and making small tweaks to the content. Overall I’m very pleased with how everything turned out. User traffic has been great so far, so it’s exciting to know that so many people have been able to experience the wonderful items in the exhibit. Please check out the website and also come visit in person — on display until June 15, 2019.

Examples of ‘Explore’ and ‘Item’ pages

Bringing ‘Views of the Great War’ to life

I recently worked on an interactive kiosk for a new exhibit in the library — Views of the Great War: Highlights from the Duke University Libraries. Elizabeth Dunn, the exhibit curator, wanted to highlight a series of letters that shared the experiences of two individuals during the war. She was able to recruit the talents of Jaybird O’Berski and Ahnna Beruk who brought the writings of Frederick Trevenen Edwards and Ann Henshaw Gardiner to life.


Excerpt from Edwards’ June 9, 1918 Letter

Elizabeth and I booked time for Jay and Ahnna in the Multimedia Production Studio, where we recorded their performances. I then edited multiple takes down into more polished versions and created files I could use with the kiosk. I also used YouTube’s transcript tool to get timed captions working well and to export VTT files.

Here is an example:

The final interface allows users to both listen to the performances and read timed transcriptions of the letters while also being able to scroll through the original typed versions.
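The timed transcriptions are driven by the exported VTT files, which are just plain text with timestamped cues. A minimal sketch (the file names and cue text here are placeholders, not the actual letters):

WEBVTT

00:00:00.000 --> 00:00:04.000
(first line of the letter, timed to the performance)

00:00:04.000 --> 00:00:09.500
(second line of the letter)

A track like this gets attached to the media element with a <track> tag:

<video src="edwards-letter.mp4" controls>
  <track kind="captions" src="edwards-letter.vtt" srclang="en" label="English" default>
</video>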


Screenshots of the kiosk interface

The exhibit is housed in the Mary Duke Biddle Room and runs through February 16. Come check it out!

Baby Advice (from Duke Digital Collections)

As a recent first-time parent, I’m constantly soliciting advice from other more experienced people I meet about how best to take care of my baby. I thought it might be fun to peruse the Duke Digital Collections to see what words of wisdom could be gleaned from years past.


This 1930 ad, part of the Medicine and Madison Avenue collection, advises that we should make regular visits to the doctor’s office so our baby can be ‘carefully examined, measured, weighed and recorded.’ Excellent – we’ve been doing that!


This one from 1946 offers the ‘newest facts and findings on baby care and feeding.’ Overall, the advice largely seems applicable today. I found this line to be particularly fun:

Let your child participate in household tasks — play at dusting or cooking or bedmaking. It’s more of a hindrance than a help, but it gives the youngster a feeling of being needed and loved.


Baby strollers seem to have many innovative features these days, but more than 100 years ago the Oriole Go-Basket — a ‘combined Go-Cart, High Chair, Jumper and Bassinet’ — let you take your baby everywhere.


I’m a puzzled father of a rather young child — it’s like this 1933 ad was written just for me…


Of all the digitized materials in the collections I searched through, this 1928 booklet from the Emergence of Advertising in America collection seems to hold the most knowledge. On page 15, they offer ‘New Ways to Interest the Whole Family’ and suggest serving ‘Mapl-Flake’ and ‘Checkr-Corn Flake’ — are these really Web startups from 2005?


And finally, thanks to this 1955 ad, I’m glad to know I can give my baby 7-up, especially when mixed in equal parts with milk. ‘It’s a wholesome combination — and it works!’

Yasak/Banned Kiosk

Recently I worked on a simple kiosk for a new exhibit in the library, Yasak/Banned: Political Cartoons from Late Ottoman and Republican Turkey. The interface presents users with three videos in which the exhibit’s curators explain the historical context and significance of the items in the exhibit space. We also included a looping background video that highlights many of the illustrations from the exhibit and plays examples of Turkish music to help envelop visitors in the overall experience.

With some of the things I’ve built in the past, I would set up different sections of an interface for viewing videos, but in this case I wanted to keep things as simple as possible. My plan was to open each video in an overlay using fancybox. However, I didn’t want the looping background audio to interfere with the curator videos. We also needed to include a ‘play/pause’ button to turn off the background music in case there was something going on in the exhibit space. And I wanted all these transitions to be as smooth as possible so that the experience wouldn’t be jarring. What seemed reasonably simple on the surface proved a little more difficult than I expected.

After trying a few different approaches with limited success, Stack Overflow (as usual) revealed a promising direction to go in — the key turned out to be jQuery’s .animate method, which can also tween numeric element properties like volume, not just CSS values.

The first step was to write the ‘play’ and ‘pause’ functions — even though in this case we’d let the background video continue to play and only lower its volume to zero. The playVid function sets the volume to 0, then animates it back up to 100% over three seconds. pauseVid does the inverse, only more quickly.

// references to the background <video> and the play/pause buttons
var vid = document.getElementById('myVideo');
var playBtn = document.getElementById('play-btn');
var pauseBtn = document.getElementById('pause-btn');

function playVid() {
  // fade the background music in over three seconds
  vid.volume = 0;
  $('#myVideo').animate({
    volume: 1
  }, 3000); // 3 seconds
  playBtn.style.display = 'none';
  pauseBtn.style.display = 'block';
}

function pauseVid() {
  // really just fading out the music, quickly
  vid.volume = 1;
  $('#myVideo').animate({
    volume: 0
  }, 750); // 0.75 seconds

  playBtn.style.display = 'block';
  pauseBtn.style.display = 'none';
}

The play and pause buttons, which are positioned on top of each other using CSS, are shown or hidden by the functions above. They are also set to call the appropriate function using an onclick event. Their markup looks like this:

<button onclick="playVid()" type="button" id="play-btn">Play</button>
<button onclick="pauseVid()" type="button" id="pause-btn">Pause</button>

Next I added an onclick event calling our pause function to the link that opens our fancybox window with the video in it, like so:

<a data-fancybox-type="iframe" href="https://my-video-url" onclick="pauseVid();"><img src="the-video-thumbnail" /></a>

And finally I used the fancybox callback afterClose to ramp up the background video volume over three seconds:

afterClose: function () {
  vid.volume = 0;
  $('#myVideo').animate({
    volume: 1
  }, 3000);
  playBtn.style.display = 'none';
  pauseBtn.style.display = 'block';
},
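Putting it together, that callback sits inside the fancybox initialization, roughly like this (fancybox 2 syntax; the selector is an assumption based on the link markup above):

$('a[data-fancybox-type="iframe"]').fancybox({
  afterClose: function () {
    // ramp the background music back up once the overlay closes
    vid.volume = 0;
    $('#myVideo').animate({ volume: 1 }, 3000); // 3 seconds
    playBtn.style.display = 'none';
    pauseBtn.style.display = 'block';
  }
});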

play/pause demo

You can view a demo version and download a zip of the assets in case it’s helpful. I think the final product works really well and I’ll likely use a similar implementation in future projects.

SNCC Digital Gateway Homepage Updates

Earlier this summer I worked with the SNCC Digital Gateway team to launch a revised version of their homepage. The SNCC Digital Gateway site originally launched in the fall of 2016, and much more content has been incorporated into it since then. The team and their advisory board wanted to highlight some of this new content on the homepage (by making it scrollable) while also staying true to the original design.

The previous version of the homepage included two main features:

  • a large black and white photograph that would randomly load (based on five different options) every time a user visited the page
  • a ‘fixed’ primary navigation in the footer

Rotating Background Images

In my experience, the ‘go to’ approach for any kind of image rotation is to use a JavaScript library, probably one that relies on jQuery. My personal favorite for a long time has been jQuery Cycle 2, which I appreciated for its light weight, flexible implementation, and price (free!). With the new SNCC homepage, I wanted to figure out a way to both crossfade the background images and fade the caption text in and out elegantly. It was also critical that the captions match up perfectly with their associated images. I was worried that doing this with Cycle 2 was going to be overly complicated with respect to syncing the timing, as in some past projects I’d run into trouble keeping discrete carousels locked in sync after several iterations — for example, when leaving the page up and running for several minutes.

I decided to try building the SNCC background rotation using CSS animations. In the past I’d shied away from using CSS animations for anything complex or presented as a primary feature, as browser support was spotty. However, the current state of browser support is much better, even if it still has a ways to go. In my first attempt I tried crossfading the images as backgrounds of a wrapper div, since the background-size: cover property would make resizing the page much easier. But I discovered that animating background-image isn’t actually supported in the spec, even though it worked perfectly in Chrome and Opera. So instead I went with the approach where you stack the images on top of each other and change their opacity one at a time, like so:

<div class="bg-image-wrapper">
  <img src="image-1.jpg" alt="">
  <img src="image-2.jpg" alt="">
  <img src="image-3.jpg" alt="">
  <img src="image-4.jpg" alt="">
  <img src="image-5.jpg" alt="">
</div>

I set up the structure for the captions in a similar way:

<div id="home-caption">
  <li>caption 1</li>
  <li>caption 2</li>
  <li>caption 3</li>
  <li>caption 4</li>
  <li>caption 5</li>
</div>

I won’t bore you with the details of CSS animation, but in short, animations are based on keyframes that can be looped and applied to HTML elements. The one thing that proved a little tricky was the timing between the images and the captions, as keyframes are expressed as percentages of the entire animation. This was further complicated by the types of transitions I was using (crossfading the images and linearly fading the captions) and by wanting to slightly stagger the caption animations so that each caption comes in after the crossfade completes and transitions out just before the next crossfade starts, like so:

Crossfade illustration
As time moves from left to right, the images and captions have independent transitions

The SNCC team and I also discussed a few options for the overall timing of the transitions and settled on eight seconds per image. With five images in our rotation, the total time of the animation would be 40 seconds. The entire animation is applied to each image, and offset with a delay based on their position in the .bg-image-wrapper stack. The CSS for the images looks like this:

.bg-image-wrapper img {
  animation-name: sncc-fader;
  animation-timing-function: ease-in-out;
  animation-iteration-count: infinite;
  animation-duration: 40s;
}


@keyframes sncc-fader {
  0% {
    opacity:1;
  }
  16% {
    opacity:1;
  }
  21% {
    opacity:0;
  }
  95% {
    opacity:0;
  }
  100% {
    opacity:1;
  }
}

.bg-image-wrapper img:nth-of-type(1) {
  animation-delay: 32s;
}
.bg-image-wrapper img:nth-of-type(2) {
  animation-delay: 24s;
}
.bg-image-wrapper img:nth-of-type(3) {
  animation-delay: 16s;
}
.bg-image-wrapper img:nth-of-type(4) {
  animation-delay: 8s;
}
.bg-image-wrapper img:nth-of-type(5) {
  animation-delay: 0s;
}
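The captions get the same 40-second looping treatment, with the same per-item delays, just with a slightly narrower visibility window so each caption fades in after its image’s crossfade completes and fades out before the next crossfade begins. A sketch of what that looks like (the exact percentages were tuned by eye, so treat these as illustrative):

#home-caption li {
  animation-name: sncc-caption-fader;
  animation-timing-function: linear;
  animation-iteration-count: infinite;
  animation-duration: 40s;
  /* each li also gets the same nth-of-type animation-delay
     offsets as the images above */
}

@keyframes sncc-caption-fader {
  0% {
    opacity: 0;
  }
  3% {
    opacity: 1; /* in just after the image crossfade finishes */
  }
  13% {
    opacity: 1;
  }
  15% {
    opacity: 0; /* out just before the next crossfade starts */
  }
  100% {
    opacity: 0;
  }
}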

The resulting animation looks something like this:

SNCC example rotation

The other piece of the puzzle was emulating the behavior of background-size: cover, which resizes a background image to fill the entire width of a div and positions it vertically in a consistent way. In general I really like using this property. I struggled to get things working on my own, but eventually came across a great code example of how to do it. So I copied that implementation and it worked perfectly.
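The common technique looks something like this (a sketch, not the exact SNCC code): the wrapper clips overflow while each image is kept at least as large as the wrapper and centered with a transform:

.bg-image-wrapper {
  position: relative;
  overflow: hidden; /* clip whichever dimension overflows */
}

.bg-image-wrapper img {
  position: absolute;
  top: 50%;
  left: 50%;
  width: auto;
  height: auto;
  min-width: 100%;  /* always fill the wrapper's width... */
  min-height: 100%; /* ...and its height, like background-size: cover */
  transform: translate(-50%, -50%); /* center the overflow on both axes */
}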

Fixed Nav

I was worried that getting the navigation bar to stay consistently positioned at the bottom of the page and allowing for scrolling — while also working responsively — was going to be a bit of a challenge. But in the end the solution was relatively simple.

The navigation bar is structured in a very typical way — as an unordered list with each menu element represented as a list item, like so:

<div id="navigation">
    <ul>
      <li><a href="url1">menu item 1</a></li>
      <li><a href="url2">menu item 2</a></li>
      <li><a href="url3">menu item 3</a></li>
    </ul>
</div>

To get it to ‘stick’ to the bottom of the page, I just placed it using position: absolute, gave it a fixed height, and set the width to 100%. Surprisingly, it worked great just like that, and it also allowed the page to be scrolled to reveal the content further down the page.
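In CSS terms it boils down to something like this (the height value is illustrative):

#navigation {
  position: absolute;
  bottom: 0;     /* pin to the bottom of the page */
  width: 100%;
  height: 80px;  /* fixed height */
}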


You can view the updated homepage by visiting snccdigital.org.

508 Update, Update

A little more than a year ago, I wrote about the proposed update to the 508 accessibility standards. About three weeks ago, the US Access Board published the final rule that updates the 508 accessibility requirements for Information and Communication Technology (ICT). The rules had not been updated since 2001 and as such had greatly lagged behind modern web conventions.

It’s important to note that the 508 guidelines are intended to serve as a vehicle for guiding procurement, while at the same time applying to content created by a given group/agency. As such, the language isn’t always straightforward.

What’s new?

As I outlined in my previous post, a major purpose of the new rule is to move away from regulating types of devices and instead focus on functionality:


… one of the primary purposes of the final rule is to replace the current product-based approach with requirements based on functionality, and, thereby, ensure that accessibility for people with disabilities keeps pace with advances in ICT.


To that effect, one of the biggest changes from the old standard is the adoption of WCAG 2.0 as the compliance level. The fundamental premise of WCAG compliance is that content is ‘perceivable, operable, and understandable’ — the bottom line is that as developers, we should strive to make all of our content usable for everyone, across all devices. The adoption of WCAG lets the board offload the responsibility of making incremental changes as technology advances (so we don’t have to wait another 15 years for updates) and also aligns our standards in the United States with those used around the world.


Harmonization with international standards and guidelines creates a larger marketplace for accessibility solutions, thereby attracting more offerings and increasing the likelihood of commercial availability of accessible ICT options.


Another change has to do with making a wider variety of electronic content accessible, including internal documents. It will be interesting to see to what degree this part of the rule is followed by non-federal agencies.


The Revised 508 Standards specify that all types of public-facing content, as well as nine categories of non-public-facing content that communicate agency official business, have to be accessible, with “content” encompassing all forms of electronic information and data. The existing standards require Federal agencies to make electronic information and data accessible, but do not delineate clearly the scope of covered information and data. As a result, document accessibility has been inconsistent across Federal agencies. By focusing on public-facing content and certain types of agency official communications that are not public facing, the revised requirements bring needed clarity to the scope of electronic content covered by the 508 Standards and, thereby, help Federal agencies make electronic content accessible more consistently.


The new rules do not go into effect until January 2018. There’s also a ‘safe harbor’ clause that protects content that was created before this enforcement date, assuming it was in compliance with the old rules. However, if you update that content after January, you’ll need to make sure it complies with the new final rule.


Existing ICT, including content, that meets the original 508 Standards does not have to be upgraded to meet the refreshed standards unless it is altered. This “safe harbor” clause (E202.2) applies to any component or portion of ICT that complies with the existing 508 Standards and is not altered. Any component or portion of existing, compliant ICT that is altered after the compliance date (January 18, 2018) must conform to the updated 508 Standards.


So long story short, a year from now you should make sure all the content you’re creating meets the new compliance level.

Hopscotch Design Fest 2016

A few weeks ago I attended my second Hopscotch Design Fest in downtown Raleigh. Overall the conference was superb – almost every session I attended was interesting, inspiring, and valuable. Compared to last year, the format this time around centered on themed groups of speakers giving shorter presentations followed by a panel discussion. I was especially impressed with two of these sessions.

Design for Storytelling

Daniel Horovitz talked about how he’d reached a point in his career where he was tired of doing design work with computers. He decided to challenge himself to create at least one new piece of art every day using analog techniques (collage, drawing, etc.). He began sharing his work online, which led to increased exposure and to clients wanting new projects in the style he’d developed, instead of the computer-based design work he’d spent most of his career on. Continued exploration and growth in his new techniques led to bigger and bigger projects around the world. His talent and body of work are truly impressive, and it’s inspiring to hear that creative ruts can sometimes lead to reinvention (and success!).


Ekene Eijeoma began his talk by inviting us to turn to the person next to us and say three things: I see you, I value you, and I acknowledge you. This seemingly simple interaction was actually quite powerful – it was a really interesting experience. He went on to demonstrate how empathy has driven his work. I was particularly impressed with his interactive installation Wage Islands, which visualizes which parts of New York City are actually affordable for the people who live there and lets users see how things change as the minimum wage rises or falls.


Michelle Higa Fox showed us many examples of the amazing work her design studio has created. She started off talking about the idea of micro-storytelling and the challenges of reaching users on social media channels, where focus is fleeting and pulled in many directions. Here are a couple of really clever examples:


Her studio also builds seriously impressive interactive installations. She showed us a very recent work involving transparent LCD screens, with dioramas housed behind the screens that were hidden and revealed depending on the context, while motion graphic content could be overlaid in front. It was amazing. I couldn’t find any images online, but I did find this video of another really cool interactive wall:

One anecdote she shared, which I found particularly useful, is that it’s very important to account for short experiences when designing these kinds of interfaces, as you can’t expect your users to stick around as long as you’d like them to. I think that’s something we can take into consideration more as we build interfaces for the library.

Design for Hacking Yourself

Brooke Belk led us through a short mindfulness exercise (which was very refreshing) and talked about how a regular meditation practice can help creativity flow more easily throughout the day. Something I need to try more often! Alexa Clay talked about her concept of the misfit economy. I was amused by her stories of role-playing at tech conferences, where she dresses as the Amish Futurist and asks deeply challenging questions about the role of technology in the modern world.

But I was most impressed with Lulu Miller’s talk. She was formerly a producer at Radiolab, my favorite show on NPR, and now has her own podcast called Invisibilia, which is all to say that she knows how to tell a good story. She shared a poignant tale about the elusive nature of creative pursuits that she called ‘the house and the bicycle.’ The story intertwined her experience pursuing a career in fiction writing while attending grad school in Portland with her neighbor’s struggle to stop building custom bicycles and finish building his house. Other themes included the paradox of intention, having faith in yourself and your work, throwing out the blueprint, and putting out what you have right now! All sage advice for creative types. It really was a lovely experience, and I hope it gets published in some form soon.

Typography (and the Web)

This summer I’ve been working, or at least thinking about working, on a couple of website design refresh projects. And along those lines, I’ve been thinking a lot about typography. I think it’s fair to say that the overwhelming majority of content that is consumed across the Web is text-based (despite the ever-increasing rise of infographics and multimedia). As such, typography should be considered one of the most important design elements that users will experience when interacting with a website.

CIT Site
An early mockup of the soon-to-be-released CIT design refresh

Early on, Web designers were restricted to ‘stacks’ of web-safe fonts: the browser would hunt through the list until it found a font installed on the user’s computer or, in the worst case, fall back to the most basic system ‘sans’ or ‘serif.’ Type design back then wasn’t very flexible and certainly couldn’t be relied upon to render consistently across browsers or platforms, which essentially resulted in most website text looking more or less the same. In 2004, some very smart people released sIFR, a Flash-based font replacement technique. It ushered in a bit of a typography renaissance, allowing designers to include almost any typeface they desired in their work with confidence that the overwhelming majority of users would see the same thing, thanks largely to the prevalence of the (now maligned) Flash plugin.
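For those who never had the pleasure, a font stack is just an ordered list of fallbacks: the browser uses the first font in the list that’s installed, falling back to a generic family as a last resort:

/* a classic web-safe stack */
body {
  font-family: Georgia, "Times New Roman", Times, serif;
}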

Right before Steve Jobs fired the initial shot that would ultimately lead to the demise of Flash, an additional font replacement technique named Cufon was released to the world. This approach used Scalable Vector Graphics and JavaScript (instead of Flash) and was almost universally compatible across browsers. Designers and developers were now very happy, as they could use non-standard typefaces in their work without relying on Flash.

More or less in parallel with the release of Cufon came widespread browser adoption of the @font-face rule. This allowed developers to load fonts from a web server and have them render on a page, instead of relying on the fonts a user had installed locally. In mid to late 2009, services like Typekit, the League of Moveable Type, and Font Squirrel began to appear. Instead of outright selling licenses to fonts, Typekit worked on a subscription model and made various sets of fonts available for use both locally with design programs and for web publishing, depending on your membership type. [Adobe purchased Typekit in late 2011 and includes access to the service via their Creative Cloud platform.] LoMT and Font Squirrel curate freeware fonts and make it easy to download the appropriate files and CSS code to integrate them into your site. Google released their font service in 2010 and it continues to get better and better; they launched an updated version a few weeks ago along with this promo video:
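The @font-face rule itself is pleasantly simple: point at font files on your server, give the face a name, and use that name elsewhere in your CSS. A minimal sketch (the font name and file paths are placeholders):

@font-face {
  font-family: 'ExampleSerif'; /* hypothetical typeface */
  src: url('/fonts/example-serif.woff2') format('woff2'),
       url('/fonts/example-serif.woff') format('woff');
  font-weight: normal;
  font-style: normal;
}

body {
  font-family: 'ExampleSerif', Georgia, serif; /* web-safe fallbacks */
}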

There are also many type foundries that make their work available for use on the web. A few of my favorite font retailers are FontShop, Emigre, and Monotype. The fonts available from these ‘premium’ shops typically involve a higher degree of sophistication, more variations of weight, and extra attention to detail — especially with regard to things like kerning, hinting, and ligatures. There are also many interesting features available in OpenType (a more modern file format for fonts) and they can be especially useful for adding diversity to the look of brush/script fonts. The premium typefaces usually incorporate them, whereas free fonts may not.

Modern web conventions are still struggling with some aspects of typography, especially when it comes to responsive design. There are many great arguments about which units we should be using (viewport, rem/em, px) and how they should be applied. There are calculators and libraries for adjusting things like size, line length, ratios, and so on. There are techniques to improve kerning. But I think we have yet to find a standard, all-in-one solution — there always seems to be something new and interesting available to explore, which pretty much underscores the state of Web development in general.
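One common pattern, to give a concrete example, is to size type in rems and scale the root font size at breakpoints so everything scales together (the values here are illustrative):

html { font-size: 16px; }

h1 { font-size: 2.25rem; } /* 36px at the default root size */
p  { font-size: 1rem; line-height: 1.5; }

@media (min-width: 64em) {
  html { font-size: 18px; } /* all rem-based sizes scale up together */
}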

Here are some other excellent resources to check out:

I’ll conclude with one last recommendation: the Introduction to Typography class on Coursera. I took it for fun a few months ago. It seemed to me that the course is aimed at those who may not have much of a design background, so it’s easily digestible. The videos are informative, concise, and not overly complex. The projects were fun to work on, and you end up providing feedback on the work of your fellow classmates, which I always enjoy. If you have an hour or two available for four weeks in a row, check it out!

Chapel Exhibit

Over the past few weeks I’ve been working on content for a new exhibit in the library: An Iconic Identity: Stories and Voices of Duke University Chapel. I’d like to share what we created and how it was built.

Chapel Kiosk

The exhibit is installed in the Jerry and Bruce Chappell Family Gallery near the main entrance to the library. There are many exhibit cases filled with interesting items relating to the history of Duke Chapel. A touchscreen Lenovo all-in-one computer is installed in the corner and runs a fullscreen version of Chrome containing an interface built in HTML. The interface encourages users to view six different videos and also to listen to recordings of sermons given by some famous people over the years (including Desmond Tutu, Dr. Martin Luther King Sr., and Billy Graham); these clips were pulled from our Duke Chapel Recordings digital collection. Here are some screenshots of the interface:

Home screen
Playing audio clips
Playing a video

Carillon Video

One of the videos featured in the kiosk captures the University Carillonneur playing a short introduction, striking the bells to mark the time, and then playing another short piece. I was very fortunate to be able to go up into the bell tower and record J. Samuel Hammond playing this unique instrument. I had no idea how physical playing it would be, and listening to the bells up close was really interesting. Here’s the final version of the video:

Chapel Windows

Another space in the physical exhibit features a projection of ten different stained glass windows from the chapel. Each window scrolls slowly up and down, then cycles to the next one. This was accomplished using CSS keyframes and my favorite image transition plugin, jQuery Cycle 2. Here’s a general idea of how it looks, sped up for web consumption:

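Conceptually, the setup pairs a Cycle 2 slideshow (which auto-initializes on anything with the cycle-slideshow class) with a keyframe animation on each slide’s image. A rough sketch, where the class names, timings, and travel distance are illustrative rather than the exhibit’s actual values:

<div class="cycle-slideshow"
     data-cycle-fx="fade"
     data-cycle-timeout="30000"
     data-cycle-slides="> div">
  <div><img class="window" src="window-01.jpg" alt=""></div>
  <div><img class="window" src="window-02.jpg" alt=""></div>
</div>

.window {
  animation: window-scroll 30s ease-in-out infinite;
}

@keyframes window-scroll {
  0%   { transform: translateY(0); }
  50%  { transform: translateY(-60%); } /* drift slowly up the window */
  100% { transform: translateY(0); }    /* and back down again */
}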

Here’s a grouping of three of my favorite windows from the bunch:

The exhibit will be on display until June 19 – please swing by and check it out!

508 Update

Web accessibility is something I care a lot about. In the 15-some-odd years that I’ve been doing professional web work, it’s been really satisfying to see accessibility increasingly become an area of focus and importance. While we’re not there yet, I am more and more confident that accessibility and universal design will be embraced not just as an afterthought, but considered essential and integrated in the first steps of a project.

Accessibility interests have been making headlines this past year, such as with the lawsuit filed against edX (MIT and Harvard). Whereas the edX lawsuit focused on section 504 of the Rehabilitation Act of 1973, web accessibility is usually synonymous with section 508. The current guidelines were enacted in 1998 and are badly in need of an update. In February of this year, the Access Board published a proposed update to the 508 standards. They will take a year or so to digest and evaluate all of the comments they have received, and it’s expected that the new rule will be published in the Federal Register around October of next year. Institutions will then have six months to make sure they are compliant, which means everything needs to be ready to go around April of 2017.

I recently attended a webinar, developed by the SSB Bart Group, on the upcoming changes. Key areas of interest to me were as follows.

WCAG 2.0 will be the base standard

The Web Content Accessibility Guidelines (WCAG) are generally a simpler yet stricter set of guidelines for making content available to all users, compared to the existing 508 guidelines. The WCAG standard is adopted around the world, so the updated rule for section 508 brings an international alignment to our standards.

Focus on functional use instead of product type(s)

The rules will focus less on ‘prescriptive’ fixes and more on general approaches to making content accessible. The current rules are very detailed in terms of what sorts of devices need to do what. The new rule tends to favor user preferences in order to give users control, with the goal of enabling the broadest range of users, including those with cognitive disabilities.

Non-web content is now covered

This applies to anything that will be publicly available from an institution, including things like PDFs, office documents, and so on. It also includes social media and email. One thing to note is that only the final document is covered, so working drafts don’t have to be accessible. Similarly, archival content is not covered unless it’s made available to the public.

Strengthened interoperability standards

These standards will apply to software and frameworks, as well as mobile and hybrid apps. However, it does not apply specifically to web apps, due to the WCAG safe harbor. But the end result should be that it’s easier for assistive technologies to communicate with other software.

Requirements for authoring tools to create accessible content

This means that editing tools like Microsoft Office and Adobe Acrobat will need to output content that is accessible by default. Currently it can take a great deal of effort after the fact to make a document accessible, and oftentimes content creators either lack the knowledge or can’t invest the time needed. I think this change should end up benefiting a lot of users.


In general, the intent of these changes is to help the 508 standards catch up to the modern world of technology. The hoped-for outcome is that accessibility gets baked into content from the start and not just included as an afterthought. I think the biggest motivator to consider is that making content accessible doesn’t just benefit disabled users; it makes that content easier to use and find for everyone.