Setting up Shibboleth + Ubuntu 14 + Drupal 7 on AWS with Virginia.edu integration

We’ve recently begun moving to Amazon Web Services for hosting; however, we still need to authenticate through ITS, which handles the central SSO authentication services for Virginia.edu. In previous posts we looked at Pubcookie, aka Netbadge – but Pubcookie is getting pretty long in the tooth (its last release was back in 2010), and since we are running Ubuntu 14 with Apache 2, integrating Pubcookie was going to be a PITA. So it was time to look at Shibboleth – an Internet2 SSO standard that works with SAML, is markedly more modern than Pubcookie, and allows federated logins between institutions.

A special thanks to Steve Losen who put up with way more banal questions than anyone should have to deal with… that said, he’s the man :)

Anyhow – ITS does a fine job of documenting the basics: http://its.virginia.edu/netbadge/unixdevelopers.html. Since we’re using Ubuntu, the only real difference is that we used apt-get.

Here’s the entire install from a base Ubuntu 14 system:

apt-get install apache2 mysql-server php5 php-pear php5-mysql php5-ldap libapache2-mod-shib2 shibboleth-sp2-schemas drush sendmail ntp

 

Apache Set up

On the Apache2 side we enabled a few modules and the default SSL site:

a2enmod ldap rewrite  shib2 ssl
a2ensite default-ssl.conf

Back on the Apache2 side, here’s our default SSL config:

<IfModule mod_ssl.c>
<VirtualHost _default_:443>
ServerAdmin webmaster@localhost
ServerName bioconnector.virginia.edu:443
DocumentRoot /some_web_directory/bioconnector.virginia.edu
<Directory /some_web_directory/bioconnector.virginia.edu>
AllowOverride All
</Directory>

SSLEngine on

SSLCertificateFile /somewheresafe/biocon_hsl.crt
SSLCertificateKeyFile /somewheresafe/biocon_hsl.key

<Location />
AuthType shibboleth
# requireSession 0 means creating a session is possible, not required
ShibRequestSetting requireSession 0
require shibboleth
</Location>
</VirtualHost>
</IfModule>

The <Location> block is important – if you don’t have it in the Apache conf, you’ll need the equivalent directives in an .htaccess file in the Drupal directory space.
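If you go the .htaccess route, a minimal sketch of the equivalent (assuming AllowOverride All is in effect, as in the vhost above) would be:

```apache
# .htaccess in the Drupal docroot – same effect as the <Location /> block
AuthType shibboleth
ShibRequestSetting requireSession 0
require shibboleth
```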

Shibboleth Config

The Shibboleth side confused me for a hot minute.

We used shib-keygen, as noted in the documentation, to create keys for Shibboleth, and ultimately the relevant part of our /etc/shibboleth/shibboleth2.xml looked like this:

<ApplicationDefaults entityID="https://www.bioconnector.virginia.edu/shibboleth"
REMOTE_USER="eppn uid persistent-id targeted-id">

<Sessions lifetime="28800" timeout="3600" relayState="ss:mem"
checkAddress="false" handlerSSL="true" cookieProps="https">
<!-- we went with SSL required, so change handlerSSL to true and cookieProps to https -->

<SSO entityID="urn:mace:incommon:virginia.edu">
SAML2 SAML1
</SSO>
<!-- this is the production value; we started out with the testing config, which ITS provides in their documentation -->

<MetadataProvider type="XML" file="UVAmetadata.xml" />
<!-- once things are working you should be able to find this at https://www.your-virginia-website.edu/Shibboleth.sso/Metadata - it's a file you download from ITS = RTFM -->
<AttributeExtractor type="XML" validate="true" reloadChanges="false" path="attribute-map.xml"/>
<!-- attribute-map.xml is the only other file you're going to need to touch -->

<CredentialResolver type="File" key="sp-key.pem" certificate="sp-cert.pem"/>
<!-- these are the keys generated with shib-keygen -->
<Handler type="Session" Location="/Session" showAttributeValues="true"/>
<!-- during debug we used https://www.bioconnector.virginia.edu/Shibboleth.sso/Session with showAttributeValues="true" turned on to see what was coming across from the UVa Shibboleth IdP -->

Our /etc/shibboleth/attribute-map.xml looked like this:

<Attribute name="urn:mace:dir:attribute-def:eduPersonPrincipalName" id="eppn">
<AttributeDecoder xsi:type="ScopedAttributeDecoder"/>
</Attribute>

<Attribute name="urn:mace:dir:attribute-def:eduPersonScopedAffiliation" id="affiliation">
<AttributeDecoder xsi:type="ScopedAttributeDecoder" caseSensitive="false"/>
</Attribute>
<Attribute name="urn:oid:1.3.6.1.4.1.5923.1.1.1.9" id="affiliation">
<AttributeDecoder xsi:type="ScopedAttributeDecoder" caseSensitive="false"/>
</Attribute>

<Attribute name="urn:mace:dir:attribute-def:eduPersonAffiliation" id="unscoped-affiliation">
<AttributeDecoder xsi:type="StringAttributeDecoder" caseSensitive="false"/>
</Attribute>
<Attribute name="urn:oid:1.3.6.1.4.1.5923.1.1.1.1" id="unscoped-affiliation">
<AttributeDecoder xsi:type="StringAttributeDecoder" caseSensitive="false"/>
</Attribute>

<Attribute name="urn:mace:dir:attribute-def:eduPersonEntitlement" id="entitlement"/>
<Attribute name="urn:oid:1.3.6.1.4.1.5923.1.1.1.7" id="entitlement"/>

<Attribute name="urn:mace:dir:attribute-def:eduPersonTargetedID" id="targeted-id">
<AttributeDecoder xsi:type="ScopedAttributeDecoder"/>
</Attribute>

<Attribute name="urn:oid:1.3.6.1.4.1.5923.1.1.1.10" id="persistent-id">
<AttributeDecoder xsi:type="NameIDAttributeDecoder" formatter="$NameQualifier!$SPNameQualifier!$Name" defaultQualifiers="true"/>
</Attribute>

<!-- Fourth, the SAML 2.0 NameID Format: -->
<Attribute name="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" id="persistent-id">
<AttributeDecoder xsi:type="NameIDAttributeDecoder" formatter="$NameQualifier!$SPNameQualifier!$Name" defaultQualifiers="true"/>
</Attribute>
<Attribute name="urn:oid:1.3.6.1.4.1.5923.1.1.1.6" id="eduPersonPrincipalName"/>
<Attribute name="urn:oid:0.9.2342.19200300.100.1.1" id="uid"/>
</Attributes>
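One gotcha worth noting: with reloadChanges="false" set on the AttributeExtractor, edits to these files don’t take effect until shibd is restarted, so after each change we bounced the services:

```
sudo service shibd restart
sudo service apache2 restart
```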

The last two entries – the ones mapped to eduPersonPrincipalName and uid – are important: they’re the bits that we pipe into Drupal.

For debugging we used https://www.bioconnector.virginia.edu/Shibboleth.sso/Session to see what was coming across – once it was all working, we got a response that looks like this:

Miscellaneous
Session Expiration (barring inactivity): 479 minute(s)
Client Address: 137.54.59.201
SSO Protocol: urn:oasis:names:tc:SAML:2.0:protocol
Identity Provider: urn:mace:incommon:virginia.edu
Authentication Time: 2015-11-16T15:35:39.118Z
Authentication Context Class: urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
Authentication Context Decl: (none)

Attributes
affiliation: staff@virginia.edu;employee@virginia.edu;member@virginia.edu
eduPersonPrincipalName: adp6j@virginia.edu
uid: adp6j
unscoped-affiliation: member;staff;employee

The uid and eduPersonPrincipalName variables are the pieces we needed to get Drupal to set up a session for us.

Lastly the Drupal bit

The Drupal side of this is pretty straightforward.

We installed Drupal as usual  and grabbed the shib_auth module.

Screen Shot 2015-11-16 at 10.46.10 AM Screen Shot 2015-11-16 at 10.47.51 AM

 

and on the Advanced Tab

Screen Shot 2015-11-16 at 10.48.58 AM Screen Shot 2015-11-16 at 10.50.07 AM
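For the record (since screenshots fade), the settings in those shib_auth tabs amount to roughly the following – the exact labels may differ by module version, and the handler URLs are the Shibboleth SP defaults rather than anything we invented:

```
Shibboleth login handler URL:   /Shibboleth.sso/Login
Shibboleth logout handler URL:  /Shibboleth.sso/Logout
Server variable for username:   uid
Server variable for e-mail:     eppn
```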

executing an r script with bash

Here’s a tangent:

Let’s say you need to randomly generate a series of practice exam questions. You have a bunch of homework assignments, lab questions and midterms, all of which are numbered in a standard way so that you can sample from them.

Here’s a simple R script to run those samples and generate a practice exam that consists of references to the assignments and their original numbers.

## exam prep script

## build hw data: 12 homework sets, 17 problems each
hw <- expand.grid(problm_set = paste0("hw", 1:12),
                  problm = 1:17,
                  stringsAsFactors = FALSE)

## build exam data: 8 exams, 22 problems each
exam <- expand.grid(problm_set = paste0("exam", 1:8),
                    problm = 1:22,
                    stringsAsFactors = FALSE)

## create practice exam: pool everything and draw 22 problems at random
prctce <- rbind(exam, hw)

prctce_test <- prctce[sample(nrow(prctce), size = 22), ]
row.names(prctce_test) <- 1:nrow(prctce_test)

print(prctce_test)

As the last line indicates, the final step of the script is to print prctce_test – a practice test that will be randomly generated each time the script is run, though it may repeat questions across runs.

output from r script

Sure. Fine. Whatever.

Probably a way to do this with Drupal … or with Excel … or with a pencil and paper … why use R?

Two reasons: 1) using R to learn R, and 2) scripting this simulation lets you automate things a little more easily.

In particular, you can use something like BASH to execute the script n number of times.

for n in {1..10}; do Rscript examprep.R > "YOUR_PATH_HERE/practice${n}.txt"; done

That will give you 10 practice test txt files that are all named with a tokenized number, with just one command. And of course that could be written into a shell script that’s automated or processed on a scheduler.
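And “processed on a scheduler” can be as simple as a crontab entry – something like this (the paths are placeholders) regenerates the ten practice tests every Monday morning. Note that cron runs commands through sh, so seq is safer than the bash-only {1..10} expansion:

```
# m h dom mon dow  command
0 6 * * 1  for n in $(seq 1 10); do Rscript /path/to/examprep.R > "/path/to/output/practice${n}.txt"; done
```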

automatically generated practice tests with bash and r script

Sure. Fine. Whatever.

OK. While this is indeed a fairly underwhelming example, the potential here is kind of interesting. Our next step is to investigate using Drupal Rules to initiate a BASH script that in turn executes an algorithm written in R. The plan is to also use Drupal as the UI for entering the data to be processed in the R script.

Will document that here if/when that project comes together.

equipment booking system — simplify(ing) comments

We don’t have a lot of feedback about how our patrons are using the current equipment booking system. There may be information that users could share with one another (and the library) if given a mechanism to do so. So as part of the new booking system implementation in Drupal, we set a task of including a commenting feature. Each reservable piece of equipment stands alone as a node so all we have to do is turn on commenting, right?

Basically.

But there are a couple of things that are worth noting about that.

If you’re enabling comments on a content type, it’s probably a good idea to consider who can view (and post comments to) that content. That’s all in the permissions table.

In our scenario, we didn’t want unauthenticated comments and we didn’t want to restrict the individual equipment pages (e.g. the page for iPad copy 2) to any kind of login. The request to reserve equipment from that page would trigger the login.

The snippet from the permissions table below shows how we adjusted the comment access. Note that these permissions will apply anywhere else we’re using comments on our site… we’re not currently, but if we do in the future we’re fine with this access level.

permissions table for comment settings toggled on for authenticated users

Once authenticated, the comment form defaults to giving users a text format selection option. There are advantages to just setting users up with a WYSIWYG format instead. This can be handled in the text format configurations or even the permissions table; an easier way is with the Simplify module.

Simplify gives you an interface to hide a bunch of stuff that may be noisy to users adding content — publishing options, path settings, etc.

And for comments it lets you hide text formats.

The finished product:

comment box without text format

equipment booking system — managing reservation conflicts with field validation and EntityFieldQuery()

The first requirement of a registration system is to have something to reserve.

The second requirement of a registration system is to manage conflicting reservations.

Setting up validation of submitted reservations based on the existing reservation nodes was probably the most complex part of this project. A booking module like MERCI has this functionality baked in – but again, that was too heavy for us, so we had to do it on our own. We started off on a fairly thankless path of Views/Rules integration: basically we were building a view of existing reservations, contextually filtering that view by content id (the item that someone was trying to reserve) and then setting a rule that would delete the content and redirect the user to an “oops, that’s not available” page. We ran into issues building the view contextually with rules (for some reason the rule wouldn’t pass the nid…) and even if we had gotten that wired up, it would have been clunky.

Scrap that.

On to Field Validation.

The Field Validation module offers server-side validation of submitted form input (not to be confused with Clientside Validation or Webform Validation) based on any number of conditions at the field level. We were trying to validate the submitted reservation on length (no longer than 5 days) and availability (no reservations of the same item during any of the days requested).

The length turned out to be pretty straightforward – we set the “Date range2” validator on the reservation date field. The validator lets you choose a “global” date format, which means you can input logic like “+ X days”, so long as it can be converted by the strtotime() function.

field validation supports global date values, which means anything that the strtotime() PHP function can convert
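Concretely, the validator settings boiled down to something like the following – the labels are approximate, and “+5 days” is the piece that strtotime() interprets:

```
Validator:       Date range2
Date format:     global
Minimum value:   now
Maximum value:   +5 days
Error message:   Reservations cannot be longer than 5 days.
```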

 

Field Validation also gives you configurations to bypass the validation criteria by role — this was helpful in our case given that there are special circumstances when “approved” reservations can be made for longer than 5 days. And if something doesn’t validate, you can plug in a custom error message in the validator configuration.

bypass length restriction so staff can accommodate special cases for longer reservations, and field validation also lets you write a custom error message

 

With the condition set for the length of the reservation, we could tackle the real beast. Determining reservation conflicts required us to use the “powerfull [sic] but dangerous” PHP validator from Field Validation. Squirting custom code into our Drupal instance is something we try to avoid as much as possible — it’s difficult to maintain … and as you’ll see below it can be difficult to understand. To be honest, a big part of the impetus for writing this series of blog posts was to document the 60+ lines of code that we strung together to get our booking system to recognize conflicts.

The script starts by identifying information about the item that the patron is trying to reserve (item = $arg1, checkout date = $arg2, return date = $arg3) and then builds an array of dates from the start to the finish of the requested reservation. Then we use EntityFieldQuery() to find all of the reservations that have dates less than or equal to the requested end date – that’s where we use the fieldCondition() with <= against the $arg3_for_fc value. What that gives us is all of the reservations on that item that could possibly conflict. Then we sort descending and trim the top value out of the list to get the nearest reservation to the date requested. With that record in hand, we can build another array of start and end dates and use array_intersect() to see if there is any overlap.
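The overlap test at the heart of this is simpler than it sounds. Here’s the same idea as a toy shell sketch – the dates and the range_days helper are made-up examples, not part of the Drupal code – expanding each reservation into its individual days and intersecting the two lists:

```shell
#!/bin/bash
# toy version of the conflict check: expand two date ranges into days
# and look for any day in common (the dates here are made-up examples)

range_days() {  # start end (YYYYMMDD, inclusive) -> one day per line
  d="$1"
  while [ "$d" -le "$2" ]; do
    echo "$d"
    d=$(date -d "$d + 1 day" +%Y%m%d)
  done
}

requested=$(range_days 20151116 20151120)  # the patron's requested reservation
existing=$(range_days 20151119 20151121)   # the nearest existing reservation

# like array_intersect(): days appearing in both lists are conflicts
conflicts=$(printf '%s\n%s\n' "$requested" "$existing" | sort | uniq -d)

if [ -n "$conflicts" ]; then
  echo "conflict on:"
  echo "$conflicts"
fi
```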

I bet that was fun to read.

I’ll leave you with the code and comments:

<?php
//find arguments from nid and dates for the requested reservation
$arg1 = $this->entity->field_equipmentt_item['und'][0]['target_id'];

$arg2 = $this->entity->field_reservation_date['und'][0]['value'];
$arg2 = new DateTime($arg2);

$arg3 = $this->entity->field_reservation_date['und'][0]['value2'];
$arg3 = new DateTime($arg3);
$arg3_for_fc = $arg3->format("Ymd");


//build out array of argument dates for comparison with existing reservation
$beginning = $arg2;
$ending = $arg3;
$ending = $ending->modify( '+1 day' );

$argumentinterval = new DateInterval('P1D');
$argumentdaterange = new DatePeriod($beginning, $argumentinterval ,$ending);

$arraydates = array();
foreach($argumentdaterange as $argumentdates){
 $arraydates []= $argumentdates->format("Ymd");  
}

//execute entityfieldquery to find the most recent reservation that could conflict

$query = new EntityFieldQuery();

$fullquery = $query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'reservation')
  ->propertyCondition('status', NODE_PUBLISHED)
  ->fieldCondition('field_equipmentt_item', 'target_id', $arg1, '=')
  ->fieldCondition('field_reservation_date', 'value', $arg3_for_fc, '<=')
  ->fieldOrderBy('field_reservation_date', 'value', 'desc')
  ->range(0,1);

$fetchrecords = $fullquery->execute();

if (isset($fetchrecords['node'])) {
  $reservation_nids = array_keys($fetchrecords['node']);
  $reservations = entity_load('node', $reservation_nids);
}
else {
  //no existing reservations for this item, so there is nothing to conflict with
  return;
}

//find std object for the nearest reservation from the top
$reservations_test = array_values(array_slice($reservations, 0, 1));

//parse and record values for dates
$startdate = $reservations_test[0]->field_reservation_date['und'][0]['value'];
$enddate = $reservations_test[0]->field_reservation_date['und'][0]['value2'];

//iterate through to create interval date array
$begin = new DateTime($startdate);
$end = new DateTime($enddate);
$end = $end->modify( '+1 day' );

$interval = new DateInterval('P1D');
$daterange = new DatePeriod($begin, $interval ,$end);

$arraydates2 = array();
foreach($daterange as $date){
 $arraydates2 []= $date->format("Ymd");  
}

$conflicts = array_intersect($arraydates, $arraydates2);

if (!empty($conflicts)) {
  $this->set_error();
}
?>

equipment booking system — content types

For our equipment booking system we needed two kinds of content types: reservation and equipment. Patrons would create nodes of the reservation content type. Staff would create nodes of the equipment content type(s). Because our library circulates a lot of different equipment, we need a bunch of content types – one for each: Apple TV, Laptop, iPad Air, Digital Video Camera, Projector, etc. The equipment content types all need the same field structure – title, description, image, accessories and equipment category. It’s tedious creating these individually, but once you get one fully fleshed out (and the skeletons of the rest in place), the Field Sync module will finish the job with the push of a button.

field sync screenshot

The reservation content type would be canonical for each kind of equipment. In other words, we don’t have to create a Reservation_iPad and a Reservation_Laptop and a Reservation_Projector, etc. There’s just one: Reservation. The way we accomplish this is by using an entity reference field that looks for any nodes of equipment content types.

entity reference configurations for reservation content type

When a patron creates a node of the reservation content type, he/she will select the equipment to be reserved in the entity reference field. This entity reference structure allows us to offer a pretty helpful feature for patrons navigating from a specific equipment page to the reservation form. An Entity Reference Prepopulate and Display Suite combination gives us a “Reserve This Item” link on equipment pages (say Apple TV – Copy 1) that sends the patron to a node/add reservation page that has that piece of equipment (in this case Apple TV – Copy 1) selected in the equipment field.

entity reference prepopulate with reservation link

There’s good documentation out there for Entity Reference Prepopulate – check out this video. But it might be worth explaining how we built the URL that prepopulates the entity reference field. With Display Suite you can create a custom code field – we hardcode a URL to the node/add reservation form, with a token for the id of the currently viewed entity appended onto it. When the user clicks the link, the Entity Reference Prepopulate module kicks in, sees the id, and builds the form with that entity referenced.
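A sketch of what that Display Suite custom code field might contain – the field machine name is the one from our validation script, and the exact query-string shape depends on your Entity Reference Prepopulate settings:

```html
<!-- hypothetical Display Suite custom code field: link to the reservation
     form, prepopulating the equipment reference with the current node -->
<a href="/node/add/reservation?field_equipmentt_item=[node:nid]">Reserve This Item</a>
```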

equipment booking system — background

This is the first in what’s going to be a series of posts documenting our equipment booking system project. We’re developers working at a library that circulates equipment (laptops, tablets, cameras, etc.) — and we’re sick of maintaining the custom PHP application that manages the reservation process. So we built the whole thing into our existing Drupal site. I say “built” because it’s done … or at least sitting on the production server waiting for content to be entered. We’re doing the documentation after the fact, so I’ll try to pick and choose what’s worth putting out there. I’m guessing that will boil down to plugging a few modules and spending way too much time writing about the PHP script we used to check for reservation conflicts. We’ll see.

The beginning of the project was deciding whether or not we wanted to use a module to manage the reservation process. Actually, the beginning was MERCI – we got a little turned around on this one… picked the module and pitched it before we had everything spec’d out. Once we dug in, MERCI turned out to be a reasonable module, just a little heavier than what we needed. In particular, the “bucket” and “resource” model was too much, and it was kind of a pain to manage without being able to get into the field configurations. We also tested out Commerce Stock for its inventory tools. Way heavier than MERCI. To use Commerce Stock we would have to install Commerce and everything that comes with it. Rather than ripping things out that we weren’t going to use (or adding more to our already overstuffed stack), we decided to build the whole thing with content types, rules and views.

No problem right?

two new drupal distros – one for voting, one for 3d printing e-commerce

Two new Drupal distributions are available on GitHub:

** https://github.com/alibama/cvillecouncilus is the distribution behind https://www.cvillecouncil.us - it’s an attempt to run a political campaign through a virtual proxy…

** https://github.com/alibama/rapid-prototyping-ecommerce-drupal – this is the code behind http://rpl.mae.virginia.edu/ – an e-commerce solution for 3d printing. A lot of this is implemented in rules and other well-standardized code thanks to Joe Pontani, a talented developer here in Virginia. Joe integrated several third-party tools and set up the UVa payment gateway through Nelnet.

Both sites are getting updates over the next few months – the Charlottesville Council website also has a drupalgap implementation on it – absolutely awesome toolset…

18F API compliance is another feature I’m pretty stoked about… I got most of that done with the OAuth2 Server, Views Datasource, and Services modules, plus a couple of great notification features built with rules + views. I’ll get that feature out ASAP – it’s really convenient: matching a Profile2 taxonomy field onto content taxonomy fields for notifications about new content.

any questions – please drop a line in the comments below

scheduling opening hours in advance

Our library is always open… *almost* always. Things happen (like Thanksgiving, Christmas, New Year’s) and sometimes we even have extended hours and stay open even longer. Altogether, we have ~14 weeks a year with “non-standard” hours.

In the past, we manually managed our weekly hours by updating a single, static piece of panel content.

wysywighours

You can probably imagine the problems we had trying to maintain this accurately – every week with deviated hours, the desk supervisor had to submit a request for us to change them. We might be busy with other projects and not get to it right away, the hours on the home page might be wrong, the desk supervisor might have to submit another request, and pretty soon they might have to ask us to change it all back to the regular hours…

Too much hassle.

We knew we needed dynamic content, and because our homepage was built as a panel page, we were ready to pump a view into the hours pane. The question was how to set up the content so that we wouldn’t have to touch it… at all.

We landed on a combination of the office hours and scheduler modules.

Office hours gives us an “Office hours” field type. Since we want to display hours on our home page in weekly chunks, we create a “Weekly Hours” content type and add that field.

weekly hours content type

And we set the default values for this field to the library’s standard opening hours – usually when we have a week with weird hours, it’s only going to affect a couple of days so this helps cut down on data entry.

default office hours field settings

Then the scheduler module kicks in – the content type must be configured to allow nodes to be scheduled for publication. Check a couple boxes and start adding content.

scheduler options of content type

Since the goal here is for us not to have to touch the hours (at all), we enlisted the desk supervisor to enter the data – with a couple of tweaks to the permissions table and a 5-minute demo (add node >> toggle hours >> set publish date for the Sunday evening before a given week >> set unpublish date for the Sunday evening of that week >> click save) he was good to go. In all, he added 10 “Weekly Hours” nodes, including one with the library’s standard opening hours, because that was as far in advance as the library had planned its opening hours.

scheduling data

With the content added, we were able to start manipulating views so that we could display it on the homepage. Since all the nodes (except for the one with standard hours) had an unpublish date, we could configure a view to output only nodes of the type “Weekly Hours” that are published, sorted by the post date descending, displaying one at a time. That way, when a week of non-standard hours is published, it’s displayed in place of the standard hours, which were published previously but are never unpublished.
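For reference, the view behind that screenshot boils down to settings roughly like these (labels approximate):

```
Fields:           Content: Office hours
Filter criteria:  Content: Published (Yes)
                  Content: Type (= Weekly Hours)
Sort criteria:    Content: Post date (desc)
Pager:            Display a specified number of items | 1 item
```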

view for current hours

By creating a block display and setting it to only output the “Office hours” field, we should be all set. If there’s any simple formatting, re-ordering, etc. that you want to do to the hours, you should check in the field settings in the view. The office hours module has substantial integration with views, and gives you a lot of flexibility in how you want to display the hours.

highly flexible field settings for office hours

After that, we drop the view block into a panel and it takes care of itself – here’s the final result on our homepage:

finalhours

One cool feature that we may implement for mobile is a current-hours display. In the view configuration for the office hours field you can set the number of days to show – by displaying only the next open day, you get a view that can be formatted something like, “The library is open today from: [*]”

current hours today display office hours module

 

developing a ticketing system to process website change requests (aka you’ve got mail)

Ever since we launched our site re-design, we’ve had a pretty steady flow of internal change requests – add links, change fonts, re-style buttons, adjust layouts, modify permissions, etc.

We were trying to process these requests through a ticketing system that we built on our staff intranet in Drupal 6 – it basically allowed us to toggle tickets between open/closed and send emails when we were finished. With requests piling up (some of which overlapped with one another and/or were vaguely articulated and/or were out of our control anyways) this system wasn’t cutting it. We decided to scrap it and develop a new workflow in Drupal 7 with three questions in mind:

1) How can the requestor clearly communicate what it is they actually want us to do?
2) How can we clearly communicate what it is we’re actually doing?
3) And how can we, as the general put it, start “hitting a small nail with an awfully big hammer”?

The first two issues were solved easily enough: we enabled comments on the “Web Request” content type. When someone creates a node of that type, we can sniff out what they need with some probing in the comments area. The comments allow for transparent dialogue, so long as you have an effective method for distribution.

Enter the awfully big hammer…

Using a combination of rules, views, display suite and user reference fields, we gave node authors the ability to create email lists specific to the requests they create. With the click of a button they can message as many of their colleagues as they choose, giving them the gift of email at every step of the request.

Here’s how we did it:

First we created a form – we set up a content type for “Web Requests” – we were going to use entityforms but couldn’t since we needed commenting. We added fields for the data we wanted to collect (request title, details, link, and screenshots). We also created administrative fields (for things like UX data, project tagging, etc.) that are only visible to the web team, and therefore don’t appear on the node/add form that requestors use.

requestform

The most important field (for the email list) is the user reference field – what we termed “Stakeholders” is a selection of users of a given role formatted as checkboxes, with no restriction on the number of values.

users

The stakeholders appear on the form listed by name – that works for the users filling out the form, but not for rules, which will need their email addresses. Using display suite, we can configure a custom output for the stakeholders field when it is used as a token. In the “Manage Display” screen, you can specify custom display settings. Since we’ll be using tokens in the rules and views configurations we want to customize that to get the user’s email.

displaysuiteconfig1

So the token display for the field needs to output [user:mail]

displaysuiteconfig2

With the content type and fields configured, we’re ready for rules.

We want “Stakeholders” to be notified when a new request is created. So, we add a rule that reacts on saving new content and triggers an email message. By adding a condition that states that the content has the stakeholder field, we can get the token (mentioned above) and use that as one of the recipients for the email. But the email action will only pass through one of the values. That means only one person will be emailed, and that’s not good enough – we need everyone getting these emails. Rules allows us to loop actions, but not without a view that contextually displays content, which in this case is a list of stakeholder email address.

So we create a new view of content – the only field we need is a stakeholders field that has a custom formatter of tokenized text and outputs [user:mail]

viewsfield

The next step is to set a contextual filter that acts on the content node id (nid) – this should generate a list of stakeholder email addresses per node. Preview it by passing the view a nid to make sure.

viewsruledisplay

In order to call the view in rules, we have to give it a rules display. This also allows you to specify row variables – the important thing in this case is that we use the rendered result and the data type is text.

viewsrulevariable

With the view in place, we can go back to the rule we created and call it as a views loop.

viewsloop

Add the email action to that loop and you’re set.

rule-2

Each event that triggers an email will have its own rule, but you can use the same view/view loop structure for all of them, except comments. To get the stakeholder addresses to work in an email that’s set off by a comment, you have to create another view. This will be a view of comments that is configured almost identically to the stakeholder view, but with a relationship that joins comments to content so that you can use the nid filter.

commentsview-2

panel permissions for content editors

Over the past few months our site has undergone a total re-design // upgrade from Drupal 6 => 7.   The priority was to rebuild functionality and migrate content quickly and cleanly – permissions schemes and editing privileges for our content contributors took a backseat. Now that the site has been launched and stabilized, we’ve begun to look at some of the tools we used to try to figure out how and if we can train our librarians to use them.

On our old site, we made pretty heavy use of blocks (particularly for sidebar items) – since we had to rebuild these pieces of content anyways, we tried to put them in more flexible containers.  We started looking at panels – we found Panopoly.  This tool worked really well for us.  We could (and did) use panel pages, custom panel layouts, views as panel panes, mini panels as blocks, blocks as mini panels, etc.  But when we started to turn over content editing responsibilities to our librarians, we discovered that the default settings were way too powerful for what they needed to do.  The interface was overwhelming and privileges were set too high – our content editors had too many options.  We had to scale back.

toomuch

The first step was to lock down Panelizer privileges on the home page – we were clued into this one when one of the librarians told us that she was seeing a “Change Layout” button on the site’s front page.  That meant she (or any other content editor) could have changed the layout of the home page with two button clicks.  Not good.

We probably could have done this a few different ways – we chose to change the renderer of the home page (built as a panel page) from “In-Place Editor” to “Standard”. Of course this means that we (admins) can’t use the groovy Panelizer interface when we want to edit content on the home page – but that’s cool since we know that those content regions are mini-panels and can be edited elsewhere.

homepagerenderer

That took care of the home page, but the librarians were still seeing too many options on the other pages (see first screenshot) – we could get rid of the In-Place Editor on all pages, but we’d have to make those configurations on each panel page (or page that had been panelized) and we would lose the slick, drag-and-drop editing interface. So we hit the permissions table.

We found that the permissions were set way too high. In the screenshots that follow, you’ll see what we left on – note that we have 11 roles in the table. The 3rd column from the left is admin, the 4th is editor (librarians) and the last one on the right is portal manager, which we made for development purposes. When we apply these changes to the production site, we’ll set editor privileges to the same as the portal manager on development. So just pay attention to the one on the far right – these are the only permissions we need for our use case: giving librarians permission to edit layout and add panel content to basic web pages.

permissionstable2 permissionstable3 permissionstable1 permissionstable7

All other panel, panel pane, panelizer, etc. privileges in the table need to be locked down. Note that some of the permissions we turned on were specific to a content type (our “Web Page” content) and that this will vary depending on your needs.

Restricting these permissions reduces access to the editing interface. Our librarians will no longer see “gear” buttons – they’ll only see “Customize This Page” and “Change Layout”.

buttons

But when they click “Customize This Page” and try to change panel content, they still get bombarded with too many editing options (see the first screenshot) – we can fix that. The Panelizer configuration allows you to adjust settings for allowed content per content authoring method.

allowedcontent1

Since we’re locking down Panelizer settings for the “Web Page” content type, that’s where we’re headed in the configuration table.

allowedcontent2

This is where we want to be – lots of boxes to uncheck, lots of buttons to push.

That’s cool – all our librarians need to do is add lists of links, images and text, and (ideally) to be able to reuse the content they create elsewhere.

Here are the settings we used:

allowedcontent3 allowedcontent4 allowedcontent5

The result?  Content editors can now…

…choose layout:

editing1

…add, edit, move, delete content to the regions within this layout:

editing2

…add links, images, text or reusable content:

editing3

Note that if they do want to reuse content, they have to specify that in the editor:

editing4