All posts by Michael Daul

508 Update, Update

A little more than a year ago, I wrote about the proposed update to the 508 accessibility standards. And about three weeks ago, the US Access Board published the final rule that contains updates to the 508 accessibility requirements for Information and Communication Technology (ICT). The rules had not been updated since 2001 and as such had lagged far behind modern web conventions.

It’s important to note that the 508 guidelines are intended to serve as a vehicle for guiding procurement, while at the same time applying to content created by a given group/agency. As such, the language isn’t always straightforward.

What’s new?

As I outlined in my previous post, a major purpose of the new rule is to move away from regulating types of devices and instead focus on functionality:


… one of the primary purposes of the final rule is to replace the current product-based approach with requirements based on functionality, and, thereby, ensure that accessibility for people with disabilities keeps pace with advances in ICT.


To that effect, one of the biggest changes from the old standard is the adoption of WCAG 2.0 as the compliance level. The fundamental premise of WCAG compliance is that content is ‘perceivable, operable, and understandable’ — the bottom line is that, as developers, we should strive to make all of our content usable for everyone across all devices. The adoption of WCAG allows the board to offload the responsibility for making incremental changes as technology advances (so we don’t have to wait another 15 years for updates) and also aligns standards in the United States with those used around the world.


Harmonization with international standards and guidelines creates a larger marketplace for accessibility solutions, thereby attracting more offerings and increasing the likelihood of commercial availability of accessible ICT options.


Another change has to do with making a wider variety of electronic content accessible, including internal documents. It will be interesting to see to what degree this part of the rule is followed by non-federal agencies.


The Revised 508 Standards specify that all types of public-facing content, as well as nine categories of non-public-facing content that communicate agency official business, have to be accessible, with “content” encompassing all forms of electronic information and data. The existing standards require Federal agencies to make electronic information and data accessible, but do not delineate clearly the scope of covered information and data. As a result, document accessibility has been inconsistent across Federal agencies. By focusing on public-facing content and certain types of agency official communications that are not public facing, the revised requirements bring needed clarity to the scope of electronic content covered by the 508 Standards and, thereby, help Federal agencies make electronic content accessible more consistently.


The new rules do not go into effect until January 2018. There’s also a ‘safe harbor’ clause that protects content that was created before this enforcement date, assuming it was in compliance with the old rules. However, if you update that content after January, you’ll need to make sure it complies with the new final rule.


Existing ICT, including content, that meets the original 508 Standards does not have to be upgraded to meet the refreshed standards unless it is altered. This “safe harbor” clause (E202.2) applies to any component or portion of ICT that complies with the existing 508 Standards and is not altered. Any component or portion of existing, compliant ICT that is altered after the compliance date (January 18, 2018) must conform to the updated 508 Standards.


So long story short, a year from now you should make sure all the content you’re creating meets the new compliance level.

Hopscotch Design Fest 2016

A few weeks ago I attended my second HopScotch Design Fest in downtown Raleigh. Overall the conference was superb – almost every session I attended was interesting, inspiring, and valuable. Compared to last year, the format this time centered on themed groups of speakers giving shorter presentations, each followed by a panel discussion. I was especially impressed with two of these sessions.

Design for Storytelling

Daniel Horovitz talked about how he’d reached a point in his career where he was tired of doing design work with computers. He decided to challenge himself and create at least one new piece of art every day using analog techniques (collage, drawing, etc). He began sharing his work online, which led to increased exposure and to clients wanting new projects in the style he’d developed, instead of the computer-based design work he’d spent most of his career on. Continued exploration and growth in his new techniques led to bigger and bigger projects around the world. His talent and body of work are truly impressive, and it’s inspiring to hear that creative ruts can sometimes lead to reinvention (and success!).


Ekene Eijeoma began his talk by inviting us to turn to the person next to us and say three things: I see you, I value you, and I acknowledge you. This seemingly simple interaction was actually quite powerful – it was a really interesting experience. He went on to demonstrate how empathy has driven his work. I was particularly impressed with his interactive installation Wage Islands. It visualizes which parts of New York City are actually affordable for the people who live there and lets users see how things change as the minimum wage rises or falls.


Michelle Higa Fox showed us many examples of the amazing work that her design studio has created. She started off talking about the idea of micro-storytelling and the challenge of reaching users on social media channels, where attention is fleeting and pulled in many directions. Here are a couple of really clever examples:

[Animated GIFs: a Polaroid spot and a cake karate-chop]

Her studio also builds seriously impressive interactive installations. She showed us a very recent piece that used transparent LCD screens with dioramas housed behind them, hidden or revealed depending on the context, while motion-graphic content was overlaid in front. It was amazing. I couldn’t find any images online, but I did find this video of another really cool interactive wall:

One anecdote she shared, which I found particularly useful, is that it’s very important to account for short experiences when designing these kinds of interfaces, as you can’t expect your users to stick around as long as you’d like them to. I think that’s something we can take more into consideration as we build interfaces for the library.

Design for Hacking Yourself

Brooke Belk led us through a short mindfulness exercise (which was very refreshing) and talked about how a meditation practice can help creativity flow more easily throughout the day. Something I need to try more often! Alexa Clay talked about her concept of the misfit economy. I was amused by her stories of role-playing at tech conferences, where she dresses as the Amish Futurist and asks deeply challenging questions about the role of technology in the modern world.

But I was mostly impressed with Lulu Miller’s talk. She was formerly a producer at Radiolab, my favorite show on NPR, and now has her own podcast called Invisibilia, which is all to say that she knows how to tell a good story. She shared a poignant tale, which she called the house and the bicycle, about the elusive nature of creative pursuits. The story intertwined her experience of pursuing a career in fiction writing while attending grad school in Portland and her neighbor’s struggle to stop building custom bicycles and finish building his house. Other themes included the paradox of intention, having faith in yourself and your work, throwing out the blueprint, and putting out what you have right now! All sage advice for creative types. It really was a lovely experience – I hope it gets published in some form soon.

Typography (and the Web)

This summer I’ve been working, or at least thinking about working, on a couple of website design refresh projects. And along those lines, I’ve been thinking a lot about typography. I think it’s fair to say that the overwhelming majority of content that is consumed across the Web is text-based (despite the ever-increasing rise of infographics and multimedia). As such, typography should be considered one of the most important design elements that users will experience when interacting with a website.

An early mockup of the soon-to-be-released CIT design refresh

Early on, Web designers were restricted to ‘stacks’ of web-safe fonts: the browser would hunt through the list until it found one available on the user’s computer, or, worst case, fall back to the generic system ‘sans-serif’ or ‘serif.’ Type design back then wasn’t very flexible and certainly couldn’t be relied upon to render consistently across browsers or platforms, which resulted in most website text looking more or less the same. In 2004, some very smart people released sIFR, a Flash-based font replacement technique. It ushered in a bit of a typography renaissance and allowed designers to include almost any typeface they desired in their work, confident that the overwhelming majority of users would see the same thing, thanks largely to the prevalence of the (now maligned) Flash plugin.

Right before Steve Jobs fired the initial shot that would ultimately lead to the demise of Flash, another font replacement technique, named Cufon, was released to the world. This approach used Scalable Vector Graphics and JavaScript (instead of Flash) and was almost universally compatible across browsers. Designers and developers were now very happy, as they could use non-standard typefaces in their work without relying on Flash.

More or less in parallel with the release of Cufon came widespread browser adoption of the @font-face rule. This allowed developers to load fonts from a web server and have them render on a page, instead of relying on the fonts a user had installed locally. In mid-to-late 2009, services like Typekit, League of Moveable Type, and Font Squirrel began to appear. Instead of outright selling licenses to fonts, Typekit worked on a subscription model and made various sets of fonts available both for local use with design programs and for web publishing, depending on your membership type. [Adobe purchased Typekit in late 2011 and includes access to the service via their Creative Cloud platform.] LoMT and Font Squirrel curate freeware fonts and make it easy to download the appropriate files and CSS code to integrate them into your site. Google released their font service in 2010 and it continues to get better and better. They launched an updated version a few weeks ago along with this promo video:
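For anyone who hasn’t written one, the @font-face rule itself is pleasantly simple. Here’s a minimal sketch (the family name and file paths are placeholders, not a real font):

```css
/* Load a webfont from the server and fall back to a web-safe stack */
@font-face {
  font-family: "ExampleSerif";
  src: url("/fonts/example-serif.woff2") format("woff2"),
       url("/fonts/example-serif.woff") format("woff");
  font-weight: 400;
  font-style: normal;
}

body {
  font-family: "ExampleSerif", Georgia, serif;
}
```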

There are also many type foundries that make their work available for use on the web. A few of my favorite font retailers are FontShop, Emigre, and Monotype. The fonts available from these ‘premium’ shops typically involve a higher degree of sophistication, more variations of weight, and extra attention to detail — especially with regard to things like kerning, hinting, and ligatures. There are also many interesting features available in OpenType (a more modern file format for fonts) and they can be especially useful for adding diversity to the look of brush/script fonts. The premium typefaces usually incorporate them, whereas free fonts may not.
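Opting in to those OpenType features from CSS is straightforward when the font actually supports them; a small, illustrative sketch:

```css
/* Ask the browser to use the font's kerning and ligature tables */
body {
  font-kerning: normal;
  font-variant-ligatures: common-ligatures;
  /* Lower-level syntax, handy for older browsers or less common features */
  font-feature-settings: "kern" 1, "liga" 1;
}
```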

Modern web conventions are still struggling with some aspects of typography, especially when it comes to responsive design. There are many great arguments about which units we should be using (viewport, rem/em, px) and how they should be applied. There are calculators and libraries for adjusting things like size, line length, ratios, and so on. There are techniques to improve kerning. But I think we have yet to find a standard, all-in-one solution — there always seems to be something new and interesting available to explore, which pretty much underscores the state of Web development in general.
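As one example of the kind of technique that’s out there, here’s a common ‘fluid type’ recipe that scales the root font size between two breakpoints; the specific values are arbitrary and just for illustration:

```css
/* Scale the root size from 16px at a 320px viewport up to 20px at 1200px */
html { font-size: 16px; }

@media (min-width: 320px) {
  html { font-size: calc(16px + 4 * ((100vw - 320px) / 880)); }
}

@media (min-width: 1200px) {
  html { font-size: 20px; }
}

/* Everything sized in rem then tracks the fluid root size */
h1   { font-size: 2.5rem; }
body { line-height: 1.5; } /* unitless, so it scales with each element's font size */
```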

Here are some other excellent resources to check out:

I’ll conclude with one last recommendation — the Introduction to Typography class on Coursera. I took it for fun a few months ago. It seemed to me that the course is aimed at those who may not have much of a design background, so it’s easily digestible. The videos are informative, not overly complex, and concise. The projects were fun to work on and you end up getting to provide feedback on the work of your fellow classmates, which I think is always fun. If you have an hour or two available for four weeks in a row, check it out!

Chapel Exhibit

Over the past few weeks I’ve been working on content for a new exhibit in the library, An Iconic Identity: Stories and Voices of Duke University Chapel. I’d like to share what we created and how it was built.

Chapel Kiosk

The exhibit is installed in the Jerry and Bruce Chappell Family Gallery near the main entrance to the library. There are many exhibit cases filled with interesting items relating to the history of Duke Chapel. A touchscreen Lenovo all-in-one computer is installed in the corner and runs a fullscreen version of Chrome containing an interface built in HTML. The interface encourages users to view six different videos and also listen to recordings of sermons given by some famous people over the years (including Desmond Tutu, Dr. Martin Luther King Sr., and Billy Graham) – these clips were pulled from our Duke Chapel Recordings digital collection. Here are some screenshots of the interface:

[Screenshots: home screen, audio files interface, playing audio clips, video player interface, playing a video]

Carillon Video

One of the videos featured in the kiosk captures the University Carillonneur playing a short introduction, striking the bells to mark the time, and then another short piece. I was very fortunate to be able to go up into the bell tower and record J. Samuel Hammond playing this unique instrument. I had no idea how physical playing the carillon is, and hearing the bells from so close was really interesting. Here’s the final version of the video:

Chapel Windows

Another space in the physical exhibit features a projection of ten different stained glass windows from the chapel. Each window scrolls slowly up and down, then cycles to the next one. This was accomplished using CSS keyframes and my favorite image transition plugin, jQuery Cycle2. Here’s a general idea of how it looks, sped up for web consumption:

[Animated GIF: a window scrolling and cycling to the next]
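For the curious, the gist of the technique looks something like this; the markup, class names, and timings below are illustrative rather than the exhibit’s actual code:

```html
<!-- Cycle2 auto-initializes on .cycle-slideshow and cross-fades between the slides -->
<div class="cycle-slideshow" data-cycle-fx="fade" data-cycle-timeout="30000" data-cycle-slides="> div">
  <div class="window"><img src="window-01.jpg" alt="Stained glass window"></div>
  <div class="window"><img src="window-02.jpg" alt="Stained glass window"></div>
</div>

<style>
  .window { height: 1080px; overflow: hidden; } /* fixed frame for the projection */

  /* Pan each tall window image slowly up and back down while its slide is showing */
  .window img {
    animation: pan-window 30s ease-in-out infinite alternate;
  }
  @keyframes pan-window {
    from { transform: translateY(0); }
    to   { transform: translateY(-60%); }
  }
</style>

<script src="jquery.min.js"></script>
<script src="jquery.cycle2.min.js"></script>
```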

Here’s a grouping of three of my favorite windows from the bunch:
[Image: three of the stained glass windows]

The exhibit will be on display until June 19 – please swing by and check it out!

508 Update

Web accessibility is something that I care a lot about. In the fifteen-odd years that I’ve been doing professional web work, it’s been really satisfying to see accessibility increasingly become an area of focus and importance. While we’re not there yet, I am more and more confident that accessibility and universal design will be embraced not just as an afterthought, but as something essential that is integrated into the first steps of a project.

Accessibility has been making headlines this past year, such as with the lawsuit filed against edX (MIT and Harvard). Whereas the edX lawsuit focused on section 504 of the Rehabilitation Act of 1973, in the web world accessibility is usually synonymous with section 508. The current guidelines were enacted in 1998 and are badly in need of an update. In February of this year, the Access Board published a proposed update to the 508 standards. They are going to take a year or so to digest and evaluate all of the comments they have received. It’s expected that the new rule will be published in the Federal Register around October of next year. Institutions will then have six months to make sure they are compliant, which means everything needs to be ready to go around April of 2017.

I recently attended a webinar on the upcoming changes that was developed by the SSB Bart Group. Key areas of interest to me were as follows.

WCAG 2.0 will be the base standard

The Web Content Accessibility Guidelines (WCAG) are, in general, a simpler yet stricter set of guidelines for making content available to all users than the existing 508 guidelines. The WCAG standard is adopted around the world, so basing the updated section 508 rule on it brings an international focus to our standards.
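To make the spirit of WCAG a little more concrete, here is a tiny, hypothetical snippet showing the kind of markup it pushes you toward (the file names and text are made up):

```html
<!-- Perceivable: images carry a text alternative for screen readers -->
<img src="chapel-tower.jpg" alt="Duke Chapel bell tower at sunset">

<!-- Understandable: form controls are explicitly labeled -->
<label for="email">Email address</label>
<input id="email" name="email" type="email">

<!-- Operable: a real button is focusable and keyboard-friendly by default,
     unlike a click handler bolted onto a bare div -->
<button type="submit">Subscribe</button>
```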

Focus on functional use instead of product type(s)

The rules will focus less on ‘prescriptive’ fixes and more on general approaches to making content accessible. The current rules are very detailed in terms of what sorts of devices need to do what. The new rule tends to favor user preferences in order to give users control, with the goal of enabling the broadest range of users, including those with cognitive disabilities.

Non-web content is now covered

This applies to anything that will be publicly available from an institution, including things like PDFs, office documents, and so on. It also includes social media and email. One thing to note is that only the final document is covered, so working drafts don’t have to be accessible. Similarly, archival content is not covered unless it’s made available to the public.

Strengthened interoperability standards

These standards will apply to software and frameworks, as well as mobile and hybrid apps. However, they do not apply specifically to web apps, due to the WCAG safe harbor. But the end result should be that it’s easier for assistive technologies to communicate with other software.

Requirements for authoring tools to create accessible content

This means that editing tools like Microsoft Office and Adobe Acrobat will need to output content that is accessible by default. Currently it can take a great deal of effort after the fact to make a document accessible, and oftentimes content creators either lack the knowledge of how to do it or can’t invest the time needed. I think this change should end up benefiting a lot of users.


In general, these changes help the 508 standards catch up with the modern world of technology. The hopeful outcome is that accessibility will be baked into content from the start and not just included as an afterthought. I think the biggest motivator to consider is that making content accessible doesn’t just benefit users with disabilities; it makes that content easier to use and find for everyone.

Hopscotch Design Fest

A few weeks ago, I had the opportunity to attend the Hopscotch Design Festival, a 2-day precursor to the music event of the same name in Raleigh, NC. The Design Fest cast a very wide net in gathering speakers from the world of design — they included urban planners, architects, musicians, and writers, in addition to more typical designer/illustrator/interactive types. While I haven’t been to that many conferences, the ones I’ve attended have usually been heavy on the tech side, typically exemplified by a sea of glowing silver MacBook Pros. During the opening keynote, so far as I could see, I was the only one with a laptop. This crowd was heavy on the analog side (pens and Moleskines). That ethos was reinforced by Austin Kleon’s presentation on essential tools for the analog desk. I wasn’t all that familiar with Kleon, but he was clearly a very skilled presenter and offered some interesting tips on maintaining creativity. I was particularly impressed with his newspaper poetry. Overall I thoroughly enjoyed the conference and hope to be able to attend again in the future.

Here are some of the speakers I particularly enjoyed:


Justin LeBlanc

I don’t watch much TV. But one show I really enjoy, thanks to my wife, is Project Runway. My favorite contestant, by far, has been Justin LeBlanc. Not only did he come across as a genuinely wonderful person on the show, his designs were amazing. I especially appreciated how his work incorporated non-traditional materials and technology, like 3D printing. Which is all to say that I was super excited to see him in person. He talked a lot about his creative process, showed off some projects he’d worked on in grad school [before he hit the big time], and also showed some newer work that he’ll debut on the runway soon. He stressed that his latest work is heavily influenced by living in North Carolina. He’s collaborated with local companies to procure materials, print fabrics, and more. The whole thing felt very positive to me.


Steve Frykholm

While I had never heard of Steve Frykholm before, I was immediately impressed by him. He’s been a designer at the famed Herman Miller company for 45 years. He’s clearly seen a lot of things change in the design industry over that time, so the perspective he shared was really insightful. He told an interesting story of the first Herman Miller catalog that was designed by George Nelson in 1952. The original proposal was for a highly stylized, photo-heavy book printed on nice paper — a sharp contrast to the text-heavy catalogs of the day. The top brass shot it down, saying it would be incredibly expensive to produce, and asked the team to come up with a new and more affordable version. The next iteration kept the same design, but added a bound cover and a $3 price tag. No one had ever charged for a product catalog, so this was a bold step. However, the bosses eventually relented and the catalog went on to be a huge success. The next year their competitors were charging $5 for their catalogs. [As an aside, an original copy of the catalog is available at the UNC Art Library.] His point in sharing this story was that sometimes you need to be the first at something — it’s OK to take bold steps and try something new. It won’t always work out, but sometimes it does. He also shared a bit about his creative process and how design work happens at Herman Miller. Towards the end of his time he talked about a series of posters he designed for the company’s annual Spring Picnic. These posters were recently added to the permanent collection at MoMA. I could have listened to him talk for much longer. He’s truly an inspiring individual.


Cheetie Kumar

I first encountered Cheetie Kumar as the lead guitarist for her band, Birds of Avalon. I just thought she was a great musician. Then I learned she was also a recording engineer/producer, an entrepreneur, a chef and restaurateur, a designer, and generally an awesome person. So, I was excited to attend her talk. She came across to me as very humble, but she was also very inspiring. She talked about how she first settled in Raleigh and how she and her bandmates / business partners have been dedicated to making it a better place ever since. She explained that they would be out on the road for months at a time and then come back home only briefly, almost like visiting, and that this fresh perspective let them find new and exciting things to love about the city that they probably wouldn’t have noticed otherwise. She also highlighted the design features she came up with in creating the space for her restaurant — wood floors salvaged from a basketball court, an awning made from leftover construction material, a penny-covered floor in the bathroom, and a wall of paintings towards the back of the space. She mentioned multiple times how much hard work friends and others contributed to making it all a success. It’s amazing how much she juggles in her day-to-day life. She also said she doesn’t get a lot of sleep.


Graham Roberts

I was familiar with Graham Roberts’ work without realizing it. He’s worked on some truly amazing projects at the New York Times, such as Inside the Quartet, Music and Gesture, and Skrillex, Diplo, and Bieber make a hit. During his talk he essentially walked us through the process of working on these projects. There were way more people involved in building these things than I would have guessed. For the Kronos Quartet piece, they captured real-time 3D data using multiple Microsoft Kinect cameras. He then had to visualize what ended up being a staggering amount of data. The end result is beautiful: abstract, but graceful in capturing the essence of the performers’ movements. He also talked about what it’s like working at the Times and how he approaches his work from the perspective of a journalist, not just as a designer/animator/3D artist. In short, his work is stunning. And while it’s inspiring, in a way it’s also hard to imagine being able to create something so amazing. But I’m hopeful that with the richness and diversity of our collections at DUL we’ll continue to make our own inspiring work.

Inspiration from Italy

One project we’ve been working on recently in the Digital Projects Department is a revamped Library Exhibits website that will launch in concert with the opening of the newly renovated Rubenstein Library in August. The interface is going to focus on highlighting the exhibit spaces, items, and related events. Here’s a mockup of where we hope to be shortly:

Exhibits Teaser

On a somewhat related note, I recently traveled to Italy and was able to spend an afternoon at the Venice Biennale, an international contemporary art show that takes place every other year. Participating artists install their work across nearly 90 pavilions, and there’s also a central gallery space for countries that don’t have their own buildings. It’s really an impressive amount of work to wander through in a single day and I wasn’t able to see everything, but many of the works I did see were amazing. Three exhibits in particular were striking to me.

Garden of Eden

The first I’ll highlight is the work of Joana Vasconcelos, titled Il Giardino dell’Eden, which was housed in a silver tent of a building from one of the event sponsors, Swatch (the watch company). As I entered I was immediately met with a dark and cool space, which was fantastic on this particularly hot and humid day. The room was filled with an installation of glowing fiber optic flowers that pulsated with different patterns of color. It was beautiful and super engaging. I spent a long time wandering through the pathway trying to take it all in.


Autonomous Trees

Another engrossing installation was housed in the French Pavilion: Revolutions by Celeste Boursier-Mougenot. I walked into a large white room where a tree with a large exposed rootball sat off to one side. Deep, meditative tones were being projected from somewhere close by. I noticed people lounging in the wings of the space, so I wandered over to check it out for myself. What looked like a wooden bleacher turned out to be made of some sort of painted foam; I stumbled and laughed when I first tried to walk on it, like many others who came into the space after me, and then plopped down to soak in the exhibit. The deep tones were subtly rhythmic and definitely gave off a meditative vibe, so it was nice to relax a bit after a long day of walking. But then I noticed the large tree was not where it had been when I first entered the room. It was moving, but very slowly. Utterly fascinating. It almost seemed to levitate. I’d really like to know how it worked (there were also two more trees outside the pavilion that moved in the same way). Overall it was a fantastic experience.

Red Sea of Keys

My favorite installation was in the Japanese Pavilion: The Key in the Hand by Chiharu Shiota. The space was filled with an almost incomprehensible number of keys dangling from entangled red yarn suspended from the ceiling, along with a few small boats positioned around the room. My first instinct was that I was standing underneath a red sea. It’s really hard to describe just how much ‘red’ there actually is in the space. The intricacy of the threads and the uniqueness of almost every key I looked at was simply mind-blowing. I think my favorite part of the exhibit was nestled in a corner of the room, where an iPad sat looping a time-compressed video of the installation of the work. It was uniquely satisfying to watch it play out and come together over and over. I’m not sure how to tap into that experience for exhibits in the library, but it’s something we can certainly aim for!


What’s in my tool chest

I recently, and perhaps inadvisably, updated my workstation to the latest version of OS X (Yosemite) and in doing so ended up needing to rebuild my setup from scratch. As such, I’ve been taking stock of the applications and tools that I use on a daily basis for my work and thought it might be interesting to share them. Keep in mind that most of the tools I use are Mac-centric, but there are almost always alternatives for those that aren’t cross-platform.

Communications

Our department uses Jabber for instant messaging. The client of choice for OS X is Adium. It works great — it’s lightweight, the interface is intelligible, custom statuses are easy to set, and the notifications are readily apparent without being annoying.

My email and calendaring client of choice is Microsoft Outlook. I’m using version 15.9, which is functionally much more similar to Outlook Web Access (OWA) than the previous version (Outlook 2011). It seems to start up much more quickly and its notifications are somehow less annoying, even though they are very similar. Perhaps it’s just the change in color scheme. I had some difficulty initially with setting up shared mailboxes, but I eventually got that to work. [Go to Tools > Accounts, add a new account using the shared email address, set the access type to username and password, and then use your normal login info. The account will then show up under your main mailbox, and you can customize how it’s displayed, etc.]

Outlook 2015 — now in blue!

Another group that I work with in the library has been testing out Slack, which apparently is quite popular with development teams at lots of cool companies. It seems to me to be a mashup of Google Wave, newsgroups, and Twitter. It seems neat, but I worry it might just be another thing to keep up with. Maybe we can eventually use it to replace something else wholesale.

Project Management

We mostly use Basecamp for shared planning on projects. I think it’s a great tool, but the UI is starting to feel dated — especially the skeuomorphic text documents. We’ve played around a bit with some other tools (Jira, Trello, Asana, etc.) but Basecamp has yet to be displaced.

Basecamp text document (I don’t think Steve Jobs would approve)

We also now have access to enterprise-level Box accounts at Duke. We use Box to store project files and assets that don’t make sense to keep in something like Basecamp or send via email. I think their web interface is great, and I also use Box Sync to regularly back up all of my project files. It has built-in versioning, which has helped me on a number of occasions with accessing older versions of things. I’d been a Dropbox user for more than five years, but I really prefer Box now. We also make heavy use of Google Drive. I think everything about it is great.

Another tool we use a lot is Git. We’ve got a library GitHub account and we also use a Duke-specific private instance of Gitorious. I much prefer GitHub, fwiw. I’m still learning the best way to use Git workflows, but compared to other version control approaches from back in the day (SVN, Mercurial) Git is amazing, IMHO.

Design & Production

I almost always start any design work by sketching things out. I tend to grab sheets of 11×17 paper, fold them in half, and make little mini booklets. I guess I’m just too cheap to buy real Moleskines (or even better, Field Notes). But yeah, sketching is really important. After that, I tend to jump right in and do as much design work in the browser as possible. However, Photoshop, Illustrator, and sometimes InDesign are still indispensable. Rarely a day goes by that I don’t have at least one of them open.

Photoshop — I still use it a lot!

With regard to media production, I’m a big fan of Sony’s software products. I find that Vegas is both the most flexible NLE platform out there and the easiest to use. For smaller, quicker audio-only tasks, I might fire up Audacity. Handbrake is really handy for quickly transcoding things. And I’ll also give a shout-out to DaVinci Resolve, which is now free and seems incredibly powerful, but I’ve not had much time to explore it yet.

Development

My code editor of choice right now is Atom — note that it’s Mac only. When I work on a Windows box, I tend to use Notepad++. I’ve also played around a bit with more robust IDEs, like Eclipse and Aptana, but for most of the work I do a simple code editor is plenty.

The Atom UI is easy on the eyes

For local development, I’m a big fan of MAMP. It’s really easy to set up and works great. I’ve also started spinning up dedicated local VMs using Oracle’s VirtualBox. I like the idea of having a separate, dedicated environment for a given project that can be moved from one machine to another. I’m sure there are other, better ways to do that, though.

I also want to quickly list some Chrome browser plugins that I use for dev work: ColorPick Eyedropper, Window Resizer, LiveReload (thanks Cory!), WhatFont, and for fun, Google Art Project.

Testing

I also make use of VirtualBox for browser testing. I’ve got several different versions of Windows set up so I can test all flavors of Internet Explorer along with older incarnations of Firefox, Chrome, and Opera. I’ve yet to find a good way to test older versions of Safari, aside from using something static like Browsershots.

With regard to mobile devices, testing on as many real-world variations as possible is ideal. But for quick-and-dirty tests, I make use of the iOS Simulator and the Android SDK emulator. The iOS Simulator comes set up with several different hardware configs, while you have to create these manually with the Android tools. In any case, both provide a great way to quickly see how a given project will function across many different mobile devices.

Conclusion

Hopefully this list will be helpful to someone out in the world. I’m also interested in learning about what other developers keep in their tool chest.

Building a Kiosk for the Edge

Many months ago I learned that a new space, The Ruppert Commons for Research, Technology, and Collaboration, was going to be opening at the start of the calendar year. I was tasked with building an informational kiosk that would sit in the entry area of the space. The schedule was a bit hectic and we ended up pruning some of the desired features, but in the end I think our first iteration has been working well. So, I wanted to share the steps I took to build it.

Setting Requirements

I first met with the Edge team at the end of August 2014. They had an initial ‘wish list’ of features that they wanted to be included in the kiosk. We went through the list and talked about the feasibility of those items, and tried to rank their importance. Our final features list looked something like this:

Primary Features:

  • Events list (both public and private events in the space)
  • Room reservation system
  • Interactive floor plan map
  • Staff lookup
  • Current Time
  • Contact information (chat, email, phone)

Secondary Features:

  • Display of computer availability
  • Ability to report printing / scanning problems
  • Book locations
  • Schedulable content on ‘home’ screen

Our deadline was the soft opening of the space at the start of the new year, but with the approaching holidays (and other projects competing for time) this was going to be a pretty fast turnaround. My goal was to have a functional prototype ready for feedback by mid-October. I really didn’t start working on the UI side of things until early that month, so I ended up needing to kick that can down the road a few weeks, but that happens sometimes.

The Hardware

The Library had purchased two Dell 27″ XPS all-in-one touchscreen machines to serve as an informational kiosk near the new/temporary main entrance of Perkins/Bostock. For various reasons, that project kept getting postponed. But with the desire to also have a kiosk in the Edge, we decided we could use one of the Dell machines for this purpose. The touchscreen display is great — very bright, reasonably accurate color reproduction, and responsive to touch input. It does pick up a lot of fingerprints, but that’s sort of unavoidable with a glossy display. The machine runs a little hot and the fan is far from silent, but in the space you don’t notice it at all. My favorite aspect of this computer is the stand. It’s really fantastic — it’s super easy to adjust, but also very sturdy. You can position it in a variety of ways, depending on the space you’re using it in, and be confident that it won’t slip out of adjustment even under constant use. In general we’re a little wary of using consumer-grade hardware in a 24/7 public environment, but for the month and a half it’s been deployed it seems to be holding up well enough.

The OS

The Dell XPS came from the factory with Windows 8. I was really curious about using Assigned Access mode in Windows 8.1, but the need to use a local (non-domain) account necessitated a clean install of 8.1. That sounds annoying, but the process is so fast and effortless, at least compared to the Windows days of yore, that it wasn’t a huge deal. I eventually configured the system as desired — it auto-boots into the local account on startup and then fires up the assigned Windows app (and limits the machine to only that app).

I spent some time playing around with different approaches for a browser to use with Assigned Access. The goal was to have a browser that ran in a ‘kiosk’ mode, with no way for the user to interact with anything outside of the intended kiosk UI — meaning no browser chrome, windows, bookmarks, etc. I also planned to use Microsoft’s Family Safety controls to limit access to URLs outside the range of pages that make up the kiosk UI. I tried both Google Chrome and Microsoft IE 11 (which really is a good browser, despite pervasive IE hate), but I ended up having trouble with both of them in different ways. Eventually, I stumbled onto a free Windows Store app called KIOSK SP Browser. It does exactly what I want — it’s a simple, stripped-down, full-screen browser app. It also has some specific kiosk features (like timeout detection), but I’m only using it to load the kiosk homepage on startup.

The Backend

As several of the requirements necessitated data sources that live in the Drupal system that drives our main library site, I figured the path of least resistance would be to build the kiosk interface in Drupal as well. Using the Delta module, I set up a version of our theme that strips out most of the elements we wouldn’t be using (header, footer, etc.) for the kiosk. I could then apply the delta to a small range of pages using the Context module. The pages themselves are, by and large, quite simple:

  • Events — I used a View to import an RSS feed from Yahoo Pipes (which combines events from our own Library system and the larger Duke system).
  • Reserve Spaces – this page loads in content from Springshare’s LibCal system using an iFrame.
  • Map — I drew a simplified map in Illustrator based on the architect’s floor plan, then saved it out as an SVG and added ID attributes to the areas I wanted to make interactive (see the sketch after this list).
  • Staff — this page loads in content from a Google spreadsheet using a technique I outlined previously on Bitstreams.
  • Help — this page loads our LibraryH3LP Chat Widget and a Qualtrics email form.
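Here is roughly what that SVG wiring looks like. This is a sketch rather than the production code; the element IDs, the .active class, and the detail-panel URL are all assumptions:

```javascript
// Highlight the tapped area of the inline SVG floor plan and load a matching detail snippet
jQuery(function ($) {
  var $areas = $('#floorplan [id^="room-"]');         // interactive shapes in the inline SVG
  $areas.on('click touchstart', function () {
    var roomId = this.id;                             // e.g. "room-101"
    $areas.attr('class', '');                         // clear any previous highlight
    $(this).attr('class', 'active');                  // CSS styles the .active shape
    $('#room-detail').load('/edge/rooms/' + roomId);  // placeholder endpoint for room info
  });
});
```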

The Frontend

When it comes time to design an interface, my first step is almost always to sketch on paper. For this project, I did some playing around and ended up settling on a circular motif for the main navigational interface. I based the color scheme and typography on a branding and style guide that was developed for the Edge. Many years ago I used to turn my sketches into high-fidelity mockups in Photoshop or Illustrator, but for the past couple of years I’ve tended to just dive right in and design on the fly with HTML/CSS. I created a special stylesheet just for this kiosk — it uses a fixed-pixel layout, as it is only ever intended to be used on that single Dell computer — and assigned it to load using Delta as well. One important aspect of a kiosk is providing some hint to users that they can indeed interact with it. In my experience, this is usually handled in the form of an attract loop.
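Mechanically, an attract loop on a web kiosk boils down to two small behaviors: hide the looping video on the first touch, and bounce back to the home screen after a stretch of inactivity. Here is a sketch of how that can be wired up; the element ID, redirect path, and fade are my own placeholders, and the three-minute timeout is described in the next paragraph:

```javascript
jQuery(function ($) {
  // 1. The attract video sits on top of the home screen; hide it on the first touch
  $('#attract-video').on('touchstart click', function () {
    $(this).fadeOut(250);
  });

  // 2. On every page, return to the kiosk home screen after three minutes of inactivity
  var idleTimer;
  function resetIdleTimer() {
    clearTimeout(idleTimer);
    idleTimer = setTimeout(function () {
      window.location.href = '/edge-kiosk';   // placeholder path to the kiosk home page
    }, 3 * 60 * 1000);
  }
  $(document).on('touchstart click keydown', resetIdleTimer);
  resetIdleTimer();
});
```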

I created a very simple motion design using my favorite NLE and rendered out an mp4 to use with the kiosk. I then set up the home page to show the video when it first loads and to hide it when the screen is touched. This helps the actual home page content appear to load very quickly (as it’s actually sitting beneath the video). I also included a script on every page to return to the homepage after a preset period of inactivity. It’s currently set to three minutes, but we may tweak that.

All in all I’m pleased with how things turned out. We’re planning to spend some time evaluating the usage of the kiosk over the next couple of months and then make any necessary tweaks to improve the user experience. Swing by the Edge some time and try it out!

Assembling the Game of Stones

Back in October, Molly detailed DigEx’s work on creating an exhibit for the Link Media Wall. We’ve now finalized our content and hope to have the new exhibit published to the large display in the next week or two. I’d like to detail how this thing is actually put together.

HTML Code

In our planning meetings the super group talked about a few different approaches for how to start. We considered using a CMS like WordPress or Drupal, Four Winds (our institutional digital signage software), or potentially rolling our own system. In the end, though, I decided to build using super basic HTML / CSS / JavaScript. After the group was happy with the design, I built a simple page framework to match our desired output of 3840 x 1080 pixels. And when I say simple, I mean simple.

[Diagram: how the exhibit page is assembled]

I broke the content into five main sections: the masthead (which holds the branding), the navigation (which highlights the current section and construction period), the map (which shows the location of the buildings), the thumbnail (which shows the completed building and adds some descriptive text), and the images (which house a set of cross-fading historic photos illustrating the progression of construction). Working with a fixed-pixel layout feels strange in the modern world of web development, but it’s quick and satisfying to crank out. I’m using the jQuery Cycle plugin to transition the images, which is lightweight and offers lots of configurable options. I also created a transparent PNG containing a gradient that fades to the background color, which overlays the rotating images.
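Stripped down, the images section looks something like this; the class names, file names, and timings are illustrative rather than the exhibit’s actual code:

```html
<div class="images">
  <div class="slideshow">
    <img src="construction-01.jpg" alt="Historic construction photo">
    <img src="construction-02.jpg" alt="Historic construction photo">
    <img src="construction-03.jpg" alt="Historic construction photo">
  </div>
  <!-- Transparent PNG gradient that fades the photos into the page background -->
  <img class="fade-overlay" src="gradient-fade.png" alt="">
</div>

<style>
  .images       { position: relative; }
  .fade-overlay { position: absolute; top: 0; left: 0; pointer-events: none; }
</style>

<script>
  // jQuery Cycle: cross-fade through the historic photos
  jQuery('.slideshow').cycle({ fx: 'fade', timeout: 6000, speed: 1500 });
</script>
```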

Another part of the puzzle I wrestled with was how to transition from one section of the exhibit to another. I thought about housing all of the content on a single page and using some JS to move from one section to the next, but I was a little worried about performance, so I again opted for the super simple solution. Each page has a meta refresh in its header, set to the number of seconds it takes to cycle through the corresponding set of images, with the next section of the exhibit as the destination. It’s a little clunky in execution and I would probably try something more elegant next time, but it’s solid and it works.
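The refresh itself is a single line in each page’s head; the timing and file name here are placeholders:

```html
<!-- After 90 seconds (the length of this section's image cycle), move on to the next section -->
<meta http-equiv="refresh" content="90; url=section-02.html">
```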

Here’s a preview of the exhibit cycling through all of the content. It’s been time-compressed – the actual exhibit takes about ten minutes to play through.

In a lot of ways this exhibit is an experiment in both process and form, and I’m looking forward to seeing how our vision translates to the Media Wall space. Using such simple code means that if there are any problems, we can quickly make changes. I’m also looking forward to working on future exhibits and helping to highlight the amazing items in our collections.