I accidentally a word.

Sep 20

OpenStack Documentation Bootcamp 2013


Image courtesy of anankkml / FreeDigitalPhotos.net

A colleague and I recently had the opportunity to attend the OpenStack Documentation Bootcamp, hosted by Mirantis in their San Francisco office. The event gave new and existing contributors an opportunity to learn about the tools and processes of the documentation project. Additional time was also allocated to discussing the direction of the project and the status of the various guides being worked on for the OpenStack Havana release. Mirantis and Rackspace provided meals for the duration of the event, and several attendees also had their costs generously met by the OpenStack Foundation.

The event started out with a welcome from Anne Gentle, documentation project leader. Anne also gave the 20-something attendees an overview of the current state of the project before we dove into heavier topics such as project tooling and automation. Later in the day Nick Chase gave new contributors a very thorough walkthrough of the steps involved in joining the project.

Two particular highlights for me were:

  • Jim Blair providing us with an understanding of the tools available for adding to and manipulating continuous integration workflows within the OpenStack project. There appear to be many opportunities for us to add simple jobs to these workflows to assist with applying higher standards to our content before it is merged into the repository.
  • David Cramer demonstrating the use of clouddocs-maven and oXygen to create DocBook olink elements linking between documents. I am still not sure how we might be able to unlock this functionality in Publican in a way that scales to larger sites, but I certainly have a much better understanding of the problem space and how we might handle conversion of such content to something Publican can build in the near future.

While thoroughly enjoying all of the sessions I also took a particular interest in the work Diane and Tom have been doing on automated generation of API and reference documentation respectively. These are both great initiatives ensuring thorough documentation of the various options available while allowing authors to concentrate on writing the “glue” in the form of concepts and tasks.

I myself also had the opportunity to present a brief introduction to and overview of Publican and the work I have been doing on tools to assist with using it to build OpenStack content. This was warmly received, which was very encouraging. For the purposes of demonstration I presented a version of the OpenStack User Guide that had been built using Publican.

I also received a lot of positive feedback regarding the “Report a Bug” links included in some (but not yet all) Red Hat OpenStack content. There seems to be a lot of interest in enabling similar functionality for the wider project using Launchpad either in addition to or instead of the existing Disqus functionality.

All in all I feel that the event was extremely beneficial, both to myself and to the wider OpenStack documentation community, with many of us already squirrelled away working on the tasks we discussed over the two days. No doubt, though, if or when there is another opportunity for a documentation bootcamp of this nature it will be even bigger and better!

Presentations

Write-ups

 

Jun 17

Open Help 2013 Wrap Up

As mentioned in my previous post, over the weekend I had the pleasure of attending the first two days of the Open Help conference in Cincinnati, Ohio. Thanks again to the conference sponsors Mozilla, GitHub, WordPress, and – my own employer – Red Hat, as well as Shaun McCance for his tireless organization. Rather than provide a brain dump quite as detailed as I did for day one, I am going to provide a full wrap-up in this post covering both days I was in attendance.

Presentations

Siobhan McKeown has provided some excellent notes from the formal presentation track, so I will provide those links here before diving into my own takeaways and key themes:

In addition to the above:

  • Siko Bouterse and Jake Orlowitz from the Wikimedia Foundation presented the Wikipedia Teahouse, an effort to create a more welcoming space for new editors with a view to ultimately “building better wikipedians”.
  • Warren Block presented on the current documentation processes, and improvement focuses, of the FreeBSD project which turns 20 this year.

Time was also provided for some more informal demonstrations and discussions, which I will get to in a moment.

Takeaways

My key takeaways and thoughts from the presentations were:

  • StackExchange style question and answer sites continue to be a very hot topic:
    • Whether you like it or not, your product/project has a presence on a StackExchange site or clone. You can choose not to engage, but you have to be happy with the advice that leads to.
    • StackExchange style sites do not replace formal documentation but provide an on-ramp to it which also happens to have a very high search engine ranking. Whenever possible cross link the formal documentation in your answers.
    • When launching a StackExchange style site it’s crucial to have early engagement, preferably from intermediate users: those who are not so expert that they have no questions of their own, but not so “fresh” that they cannot answer any. This is a key issue I see with ask.fedoraproject.org and ask.openstack.org, where the lack of that kind of engagement push has (in my opinion) resulted in a trickle of questions, many of which either go unanswered or don’t receive up/down votes.
  • Metrics to die for:
    • The majority of the presentations were able to cite very solid metrics illustrating how site or program usage changed in reaction to various changes, in particular breaking down how content is being accessed and used (not just when and from where, as has traditionally been the case). The presentations of Michael Verdi (Mozilla) and Jorge Castro (Canonical) were particularly convincing in this regard.
    • A lot of the documentation demonstrated was of the troubleshooting/knowledge base variety – short, sharp articles. Usually these had at least a “like” or “I found this helpful” button, if not comments. Integration of comment systems like Disqus in more formal documentation is also becoming very common. Consensus was that if you do this you need someone dedicated to curating the comments (in particular turning those that are appropriate into bugs) and getting rid of spam, but that it is worth it to remove the barriers for users wanting to report documentation issues, who will jump through very few hoops.
  • On-screen tutorials are really taking off:
    • Several of the presentations included guided tutorials that interacted with the elements of the UI itself to walk users through the steps rather than deferring to an external guide. This functionality was really elegantly implemented – I would love to see something like this in some of the web-based interfaces I frequent, like oVirt’s Administration Portal and OpenStack’s Horizon.
    • Technologies used included Walkthrough and Selenium.
  • Communities are often unwilling to delete or archive old content, regardless of its current correctness.
    • This presents a large maintenance burden for several of the groups present.
  • There is just some outright cool stuff happening in user help and documentation; at some level it was simply reinvigorating to get to the fire hose and “fill up” on ideas.

Demonstrations

  • Jorge Castro from Canonical came back to demonstrate Discourse, an effort by some of the people behind StackExchange to revolutionize the web forum. I was already aware of it, but nonetheless Jorge is an entertaining and energizing speaker so it was interesting to get his first impressions. On face value it certainly seems like a lot of thought and effort has been put into making it an immediate improvement on the existing options for forum software, and it’s all delicious open source.
  • Shaun McCance demoed screencasts with translatable subtitles, sweet! The screencasts themselves were in WebM format, with the subtitles and timing defined in TTML, a W3C standard format for timed text. This was very, very cool – I want the ability to do this yesterday. He also showed off some of the new “getting started” videos that appear in more recent versions of GNOME (I had seen this in Fedora 19 so I assume GNOME 3.8), which were based on simple SVG wireframe style images combined using Blender.
  • This led on to Michael Verdi from Mozilla providing a brief demonstration of recording screencasts and user acceptance tests with ScreenFlow. It looks like a very cool way to quickly put high quality videos together, but is unfortunately Mac only.
  • I did a brief introduction to and demonstration of PressGang CCMS and Press Star. I attempted to provide a contrast to Lana Brindley’s talk “Open Source in Four Easy Steps (and one slightly more difficult one)”, given at the same conference in 2011, by discussing where we were in our topic-based authoring journey then and where we are now, and providing a quick demo creating a new content specification (topic map), creating a topic, tagging the topic, and editing it. There were some gremlins (live demos, huh) but overall this was well received. Lee Hunter’s talk earlier in the day about turning Drupal into a fully fledged CCMS for technical authoring shows that there is definitely interest out there in taking this next step for FOSS documentation.
  • Lee returned quickly to show off xmled, an in-browser editor for DITA. It takes a single-pane WYSIWYG (sort of) approach instead of the two-pane view with an XML editor on one side and rendering on the other.

Hopefully I didn’t miss anyone! I flew out on the Sunday evening, leaving others to begin their book sprints for the remaining two days. Shaun’s short and sweet demo of the subtitled screencasts was definitely a highlight for me, but all of the presentations over the weekend were of a very high quality and provided a lot of food for thought. I will definitely be looking to attend this again in future and can highly recommend it to others!

Social Media

#openhelp

Eventifier

Jun 16

Can I get some cheese with that? Open Help Day #1

This weekend I am attending the first two days of the Open Help conference in Cincinnati, Ohio. Open Help brings together leaders in open source documentation and support, as well as people from across the technical communications industry who are interested in community-based help. This year’s event was possible thanks to the contributions of Mozilla, GitHub, WordPress, and my own employer – Red Hat.

I haven’t had the privilege of attending the event in the previous two years but, based on the accounts of those who have, attendance was up this year. No doubt this is in part due to the tireless efforts of organizer Shaun McCance.

Solving the Q+A conundrum with StackExchange

After a buffet breakfast, normalization of caffeine levels, and a quick welcome from Shaun, we launched into the presentation phase of the conference. First up was Jorge Castro from Canonical with a presentation covering the issues with existing user support systems that led to the creation of the askubuntu StackExchange. Jorge also provided some tips for working within the framework of StackExchange sites (and StackExchange clones, like askbot). Siobhan McKeown has provided her detailed notes from Jorge’s presentation so I am going to concentrate on my own key takeaways:

  • Whether you are cultivating it or not, if your FOSS project is even mildly well known then it has a presence on a StackExchange site under one tag or another.
  • StackExchange sites in particular (the clones not as much) have very high rankings in search results; treat them not as a replacement for writing good formal documentation but as an on-ramp to it.
  • When launching a StackExchange site, or one based on one of the clones, early engagement is critical.
  • The biggest complaint about StackExchange sites is that they are very sterile. The aggressive design and concentration on getting the best answers means that the sense of being a community takes a hit.

Personally this last point is an area where I see the ask.fedoraproject.org and ask.openstack.org askbot sites as really struggling. Even those users that are engaged enough to earn the required reputation points are not actively involved in using the moderation and editing facilities provided. Consensus in open discussion later in the day was that in the beginning it is desirable to encourage and cultivate a group of early adopters whose understanding of the subject matter is neither that of a beginner, nor of an expert.

Instead the best early adopters are those with an intermediate level of knowledge who both have the need to ask some questions, and the ability to answer others. Not obtaining the correct mix early on results in beginners who are frustrated that their questions never get answered, advanced users who are frustrated with not having questions to answer (or with the level of the questions being asked), or worse still – both.

How Mozilla supports users all over the world

Next up was Michael Verdi from Mozilla presenting a discussion of the support they provide for users and how it has been, and is, evolving. Michael’s slides are available online and again Siobhan has provided some excellent notes for those who want to dive in. In addition to giving a broad overview of what the Mozilla support organization does and some (very) interesting functionality being integrated into Firefox, Michael introduced the kitsune software that powers support.mozilla.org. Some things I found particularly interesting were:

  • The knowledge base detects and reacts to the user-agent of the browser used to access it to ensure that the correct version of each article is displayed.
  • Question and Answer, StackExchange style, functionality is included (anyone noticing a pattern here?).
  • It provides metrics and, based on Michael’s slides, lots of them. I think this is information we are often struggling for in technical documentation. The number of questions we have is basically limitless but we have to be willing and able to collect the data to answer them, or at least ask the NSA to share.
  • A Twitter client is included that aggregates Firefox-related tweets for members of the “Army of Awesome” to respond to, if appropriate. This was something I found very topical; lately, to keep up with user interests for a project I work on, I have found myself checking Twitter, a forum, an askbot instance, and a mailing list on an almost daily basis. It seems to me there is a need to be able to select and aggregate user queries from these diverse sources to get them in front of the right people. This seems to be particularly important when the right people stick primarily to mailing lists, which as mentioned earlier in the day are arguably the most convenient form of user support for developers and the least convenient for end users.

Michael finished up by providing some great examples of the way that metrics have fed into efforts to identify the most common pain points for users and the actions taken based on this information to improve the user experience.

One of the other key things I took away from Michael’s talk was that most popular FOSS projects cannot scale to support every single user interactively; ideally the majority of users should be consuming information rather than being forced to actively seek it in forums and the like. The key is identifying ways to either improve the content or the software itself to ensure that this is the case. Mozilla seem to be doing a really great job of this and overall I was extremely impressed with what Michael brought to the table today.

Open support: A panel discussion

A panel discussion on open support rounded out the events of the morning. Siko Bouterse, grantmaker at the Wikimedia Foundation, and Jeremy Garcia, founder of LinuxQuestions, joined the earlier presenters on the panel. Not surprisingly a lot of the discussion centered around the earlier presentations and the common themes they shared, particularly in relation to Q&A style user help. Jeremy, being the founder of arguably the biggest existing “traditional forum” covering Linux, provided an interesting contrast to this discussion. In particular Jeremy cited the feeling of community and friendliness that has been fostered at LinuxQuestions as being a central part of the draw for new users.

As part of the discussion Siko provided a brief introduction to the Wikipedia Teahouse, an effort aimed at improving editor retention by providing a fun area for new users to gain a sense of community and ultimately become “better wikipedians”. At some point in the discussion stripe.com was provided as an example of what happens when a project is launched and the founders make a determined effort to focus on having the best documentation possible.

Listening to your audience

Rich Bowen, who has recently joined Red Hat as an OpenStack Community Liaison, closed out the day. Rich used his past experiences in the Apache, Perl, and PHP communities to deliver a powerful talk on identifying, listening to, and reacting appropriately to the audience for documentation. His slides are also available online, as is another excellent set of notes written by Siobhan. I would recommend reading both; I am unlikely to do them justice here as a wide range of topics were covered, but my key takeaways were:

  • Comments on documentation often indicate a documentation issue that would otherwise go unnoticed. Spam is a huge issue, and curation is important, but the general consensus in the resulting discussion seemed to be that this overhead is preferable to sending users to a system like Bugzilla and losing a huge percentage of those contributions.
  • Sometimes the right outcome is ultimately just to “fix stuff”. In many ways as an author you are the conduit between the users and the developers. You might have to document a workaround in the short term, but where appropriate ensure the need for a better solution is tracked and highlight it to the developers.
  • Whether you like it or not a StackExchange site is the de facto documentation for your project. You can choose not to engage there, but you have to be comfortable with the advice that results from that decision. At the same time your formal documentation can never be what StackExchange is. The two are in many ways symbiotic; like Jorge earlier in the day, Rich encouraged linking your documentation from answers on the StackExchange site and using answers on the site to identify concepts missing from the documentation.
  • Examples in documentation must be correct, fully functional, tested, useful, and annotated. More often than not examples used in documentation fail one or more of these tests.

All in all there was plenty of food for thought in both this and other talks presented today. Without a doubt though the themes throughout were that StackExchange style Q&A sites are here to stay and that metrics are increasingly playing a huge role in driving the user support choices being made by FOSS projects.

Twitter Feed

#openhelp

May 18

Is this thing on?

It’s been a while coming, but I have recently moved my blog from Blogspot to this spanking new domain hosted on Red Hat’s OpenShift Platform as a Service (PaaS) solution. If you are reading this you managed to find me anyway, wicked.

Jan 11

Setting (Long) Expiry Date for IPA User Passwords

I use Identity, Policy, Audit (IPA) to provide authentication services to my oVirt and Red Hat Enterprise Virtualization environments. By default IPA not only forces passwords for all user accounts to expire at relatively frequent intervals, but also makes it difficult to turn this behaviour off.

Future versions of IPA are slated to make this functionality configurable on a more granular level, but in the meantime here is how I configured all (existing) users in the system to have a password expiry date some time in 2037:

  • Obtain a Kerberos ticket for the administrative user.

$ kinit admin

  • Generate an LDIF file containing directives to change the krbpasswordexpiration value for each user.
  • You can use the following script to do this, changing the base DN (dc=example,dc=com in the script below) and the expiry timestamp to match your environment.

#!/bin/sh

# List the uid of every user account; adjust the base DN to match your environment.
USERS=`ldapsearch -Y GSSAPI -b "cn=users,cn=accounts,dc=example,dc=com" | grep -o 'uid=[a-z]*' | cut -f 2 -d '='`

touch ./update_krbpasswordexpiration_all.ldif

for USER in ${USERS}; do

# Append a modify directive for each user to the LDIF file.
cat >> ./update_krbpasswordexpiration_all.ldif <<DELIM
dn: uid=${USER},cn=users,cn=accounts,dc=example,dc=com
changetype: modify
replace: krbpasswordexpiration
krbpasswordexpiration: 20371231011529Z

DELIM

done

  • Use ldapmodify to log in as the directory manager and apply the modifications in the LDIF file. You will be prompted to enter your directory manager password to complete this step.

$ ldapmodify -x -D "cn=directory manager" -W -vv -f update_krbpasswordexpiration_all.ldif
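
To confirm that the change was applied you can check a single user's entry. This is a minimal sketch assuming the same example base DN and a hypothetical user jdoe; it relies on the Kerberos ticket obtained earlier:

$ ldapsearch -Y GSSAPI -b "uid=jdoe,cn=users,cn=accounts,dc=example,dc=com" krbpasswordexpiration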

References:

 

Mar 13

AQEMU Now Packaged for Fedora

It has taken some time but my AQEMU package has been accepted by the Fedora Project.

    “AQEMU is a GUI to the QEMU and KVM emulators, written in Qt4. The program has a user-friendly interface for setting the majority of QEMU and KVM options.”

It is an open source project started by Andrey Rijov and, while a little rough around the edges, it is a viable alternative to virt-manager, particularly for KDE users.

The packages currently exist in the updates-testing repositories for Fedora 15, Fedora 16, and the yet to be formally released Fedora 17. To install the package on your chosen Fedora release run:

    # yum install --enablerepo=updates-testing aqemu

If you want to help speed up the process of getting AQEMU into the stable repositories be sure to test the package(s) and log in to Bodhi to add karma!

Nov 24

Testing oVirt Engine on Amazon EC2

Red Hat recently launched an open source virtualization management project called oVirt. This project is based on the source code for their Red Hat Enterprise Virtualization product, including a new web administration interface that will appear in a future release.

Building and deploying oVirt is, at the moment, quite time consuming. To give people an opportunity to quickly get an instance up and running to have a look at the new user interface, I thought I would provide an Amazon Machine Image (AMI) for use on Amazon’s EC2 service.

Note that the image covers the oVirt Engine portion of the project only, consists of a very early build of the oVirt code, and is not intended for anything other than testing and development use.

The image currently exists in the us-east-1 (Virginia) region, identifies as ami-07438b6e, and is named oVirt Engine Appliance. When launching an instance based on the image ensure that you choose an instance type of m1.large or above so that enough RAM is available.

You must also use a security group that allows access to the following ports:

  • 22
  • 8080
  • 8443
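
For reference, here is a minimal sketch of opening those ports and launching the instance using the AWS command line tools; the security group name ovirt-test and key pair my-key are placeholders, and the commands assume your AWS credentials are already configured:

$ aws ec2 create-security-group --group-name ovirt-test --description "oVirt Engine testing"
$ aws ec2 authorize-security-group-ingress --group-name ovirt-test --protocol tcp --port 22 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-name ovirt-test --protocol tcp --port 8080 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-name ovirt-test --protocol tcp --port 8443 --cidr 0.0.0.0/0
$ aws ec2 run-instances --image-id ami-07438b6e --instance-type m1.large --security-groups ovirt-test --key-name my-key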

As always when using a public image on Amazon EC2 you should also take care to ensure that it is secure. Once the image is running you can view the new web administration interface by accessing:

     http://[MY_AWS_INSTANCE_ADDRESS]:8080/webadmin

The default user is admin with the password letmein!. If you intend to leave the instance running then you must change this.

Obviously this image is not a long term solution for creating an oVirt environment with hosts attached on which you can launch virtual machines, but I thought it might assist people with seeing what all the fuss is about!

Nov 19

Network Bridging in Fedora 16 Without Disabling NetworkManager

Creating a network bridge to allow virtual machines direct access to the network, rather than using network address translation (NAT), is not a new concept. It is however a task that has become more complex since most popular Linux distributions switched to using NetworkManager for, you guessed it, network management.

NetworkManager, unlike the old network management tools, does not currently support the creation of network bridges. As a result of this oversight most articles I have seen on the web which discuss creation of network bridges on Linux recommend turning NetworkManager off. While this is indeed a valid way to handle the problem, it means that you must either manage all network interfaces using the old network management tools or switch NetworkManager on and off as needed.

Personally, while I do have a need to create network bridges on a regular basis for my virtual machines, I also prefer using the userland tools built on top of NetworkManager to manage my wireless connections.

To this end, today I will be illustrating how to create a network bridge on a physical Ethernet interface managed by the old network service, while continuing to run NetworkManager for my other connections. As usual my weapon of choice is Fedora, in this case version 16, which has just been released. Let’s get started!

Prerequisites

Before getting started make sure your existing network configuration is working by running ifconfig. In particular take note of the device name for your Ethernet device; if you have just moved to Fedora you may find it has changed from what you are used to.

$ ifconfig
p5p1      Link encap:Ethernet  HWaddr 78:84:3C:E0:C8:6D 
          inet addr:192.168.1.120  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::7a84:3cff:fee0:c86d/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:911 errors:0 dropped:0 overruns:0 frame:0
          TX packets:127 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:108021 (105.4 KiB)  TX bytes:10874 (10.6 KiB)
wlan0     Link encap:Ethernet  HWaddr 90:00:4E:C0:5A:0D
          inet addr:192.168.1.135  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::9200:4eff:fec0:5a0d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1300699 errors:0 dropped:0 overruns:0 frame:0
          TX packets:860018 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1695740220 (1.5 GiB)  TX bytes:102433188 (97.6 MiB)

From the output we can see that my onboard Ethernet card, which used to be referred to as eth0, is now referred to as p5p1. Importantly we can also see that both devices are up and working.

Stop Services

Before changing the network configuration files it is important to ensure that both the NetworkManager and network services are stopped. You must be root, or have root permissions via sudo, to perform this action.

# systemctl stop NetworkManager.service
# systemctl stop network.service

Stopping the network services can take some time. Note that usually only NetworkManager will be running; after all, being able to run both at the same time is what we are out to achieve! Check that both services have actually stopped before continuing.

# systemctl status NetworkManager.service
# systemctl status network.service

The service’s current state will be listed in the ‘Active:’ field in the readout from each command.

Prepare to be Bridged

Change into the directory where the network configuration scripts live.

# cd /etc/sysconfig/network-scripts/

The configuration scripts for your network interfaces live in this folder. The script for each interface is named ifcfg-<interface name>. So in my case the configuration for the wireless interface is ifcfg-wlan0 and the configuration for the physical Ethernet interface is ifcfg-p5p1.

As the wireless interface is to continue to be managed by NetworkManager no changes are required to its configuration. We do however need to make changes to the configuration of the physical Ethernet interface so that it is ready to be bridged.

Open the configuration for the physical Ethernet interface in your favourite text editor:

# vim ifcfg-p5p1

The exact contents will vary depending on your exact installation. Mine looks like this:

DEVICE=p5p1
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="dhcp"
HWADDR=78:84:3C:E0:C8:6D
NM_CONTROLLED="yes"

In particular note that the interface is brought up on boot, uses DHCP to obtain a network address, and is currently controlled by NetworkManager. The HWADDR listed is just the MAC address of the device; generally it should be left as is.

To prepare the device to be bridged we need to make two changes:

  1. Set NM_CONTROLLED to "no", telling NetworkManager not to manage this interface.
  2. Add the line BRIDGE="br0" to indicate that the device is to be used by a bridge called br0.

The resultant file is as follows:

DEVICE=p5p1
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="dhcp"
HWADDR=78:84:3C:E0:C8:6D
BRIDGE="br0"
NM_CONTROLLED="no"

At this point only half the configuration is complete. We now need to define the bridge itself.

Define the Bridge

Unlike the Ethernet interface configuration, the configuration for the bridge will not exist yet. You will need to create it; usually the first bridge is called br0 and is defined in the configuration file ifcfg-br0.

Create the file and add the following contents to it:

DEVICE=br0
TYPE=Bridge
BOOTPROTO="dhcp"
ONBOOT="yes"
NM_CONTROLLED="no"

This sets up the bridge as an interface that uses DHCP to obtain a network address, starts on boot, and, most importantly, is not controlled by NetworkManager (not that NetworkManager knows how to control it anyway, but I digress).

Bringing it Up

Now that we’ve configured the bridge, it’s time to bring network services back up. The order in which you start the two services should not matter as the configurations explicitly say which devices should not be controlled by NetworkManager.

# systemctl start NetworkManager.service
# systemctl start network.service

If the services do not come up as expected, check the output of systemctl status for the service(s) that failed. Other hints may also be present in /var/log/messages. One particular thing to look out for, which I have encountered, is SELinux issues affecting the DHCP client started by the network service.
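
If you suspect SELinux, a quick check for recent AVC denials is shown below; this is a minimal sketch, and the ausearch tool is provided by the audit package:

# ausearch -m avc -ts recent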

Check ifconfig again to verify that both your wireless interface and your new bridge interface have been brought up successfully and have an IP address. Note that the physical Ethernet device will not have an IP address listed; it is instead assigned to the bridge.
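
You can also confirm that the physical Ethernet interface has been attached to the bridge using brctl (provided by the bridge-utils package); in my case the p5p1 device should appear in the interfaces column for br0:

# brctl show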

Making it Stick

Once both services are running side by side, it is necessary to ensure that both will start on reboot.

# systemctl enable NetworkManager.service
# systemctl enable network.service

Result

You have now successfully set up a network bridge while keeping your other network interfaces managed using NetworkManager. In particular this means you can continue to use the userland tools to manage your wireless connections while having a bridge which can be used by your virtual machines.

Here is the way the bridge appears in Virtual Machine Manager’s network interface view:

Nov 03

Installing OwnCloud on Openshift Express PaaS

Updated Friday May 25th to cover OwnCloud 4!

OpenShift Express is a free Platform as a Service (PaaS) solution provided by Red Hat. It allows developers to quickly and easily deploy their applications on cloud servers while Red Hat handles the management overhead.

Currently OpenShift Express supports applications created in a number of languages including PHP, Java, Ruby, and Perl. As well as allowing developers to quickly and easily deploy their own applications, OpenShift provides an easily accessible test bed for off-the-shelf open source web applications.

I am going to demonstrate quickly setting up OwnCloud. OwnCloud is a project aimed at providing users with the same abilities as many commercially backed personal clouds, but with the ability to deploy it anywhere you choose.

Today our infrastructure of choice is provided by OpenShift, but an OwnCloud installation can just as easily exist on, or be moved to, a virtual private server or the machine in your basement.

Register and Obtain Client Tools

Register with OpenShift Express; the registration page is available at https://openshift.redhat.com/app/login. As part of the sign-up process you will also be prompted to create a key and install the client tools for OpenShift Express.

Create a Domain

If you did not do so during registration then you need to create an OpenShift domain. At the time of writing each user is permitted one domain name and five applications. The URLs for your applications will take the form:

http://<application name>-<domain name>.rhcloud.com/

The rhc-create-domain tool is used to create a domain, providing a name for the domain and your OpenShift login credentials:

$ rhc-create-domain -n <domain name> -l <login>

The tool prompts you for your password and, assuming it isn’t already taken, creates the domain.

Create the Application

Before you can deploy OwnCloud you must create a stub application in the format that OpenShift understands. OpenShift adds application support for programming languages, frameworks, and even databases based on ‘cartridges’.

Because OwnCloud is written in PHP we will be using the php-5.3 cartridge to create the application. Then to provide MySQL support we will also add the mysql-5.1 cartridge.

Use the rhc-create-app tool to create the application, providing a name for the application and your OpenShift login credentials. You will also need to provide the password associated with your key, created during registration, to complete application creation.

Note that by default the local copy of the application is created in your current working directory. This is where you will update and deploy your application from.

$ rhc-create-app -a <application name> -l <login> -t php-5.3

Now use rhc-ctl-app to add MySQL support.

$ rhc-ctl-app -a <application name> -l <login> -e add-mysql-5.1

Be sure to take note of the Root User, Root Password, Database Name, and Connection URL of the database.

Install OwnCloud

Change into the directory that was created when you ran rhc-create-app. This directory contains a number of files and directories:

  • .openshift/
  • deplist.txt
  • libs/
  • misc/
  • php/
  • README

Check the README file for a full explanation of what each of these is for. For now we will be concentrating on deploying OwnCloud into the php/ subdirectory.

Change into the php/ subdirectory, then download and extract the OwnCloud source tarball.

$ wget http://owncloud.org/owncloud-download-4-0-0 -O owncloud-4.0.0.tar.bz2
$ tar -xf owncloud-4.0.0.tar.bz2 --strip-components=1
$ rm owncloud-4.0.0.tar.bz2

Now our local copy is ready to deploy to the OpenShift Express servers. OpenShift Express uses git to facilitate version control and deployment. To deploy we must:

  • Add the new files to our commit, ensuring the .htaccess file is also added:
    • $ git add * .htaccess
  • Commit the new files, entering a commit message when prompted:
    • $ git commit
  • Push the commit to the remote server:
    • $ git push

Now, access your application in a web browser at an address of the form:

http://<application name>-<domain name>.rhcloud.com/

The OwnCloud setup wizard will appear.


Enter a Username and Password for your OwnCloud administration account. Remember that this application is running on the public internet and therefore must have a secure password.

Click Advanced and select MySQL as the storage engine. This enables a number of additional options.


These options should be set as follows:

  • The Data folder should be set to ../../data. This folder is the location of the persistent data storage for an OpenShift Express application.
  • The Database user must be set to the database username as returned when adding the MySQL cartridge.
  • The Database password must be set to the database password as returned when adding the MySQL cartridge.
  • The Database name must be set to the database name as returned when adding the MySQL cartridge.
  • The localhost value must be replaced with the appropriate host as returned when adding the MySQL cartridge. This will be in the form of an IP address; the protocol and port information can safely be discarded.

Once you are satisfied with the values entered, click Finish Setup.

Finished!

Assuming all has gone well, you will be logged into your newly created OwnCloud installation running on OpenShift Express!

For some hints on what you can actually do with it, see:

http://owncloudtest.blogspot.com/2011/06/what-you-can-do-with-owncoud-today.html

Sep 08

Changing the Primary Display in GNOME 3

I recently ran into a problem with GNOME 3 and my external monitor. GNOME 3 defaulted to displaying notifications and the activities overlay on what I consider to be my secondary monitor.

Investigating the graphical display configuration tools, I was unable to find an option to change this. Luckily xrandr supports setting the primary display, and GNOME 3 appears to take heed of it. To change the primary display:

  1. Run xrandr to list available displays.
  2. Run xrandr --output <output name> --primary to set the primary display (see the example below).
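
As a quick illustration of both steps, here is a sketch using DP-1 as a placeholder output name; substitute whichever name xrandr reports for your monitor:

$ xrandr | grep " connected"
$ xrandr --output DP-1 --primary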

In Fedora 15 the xrandr command is provided by the xorg-x11-server-utils package.

Update: Another viable solution has been posted in the comments. Unlike the one I have posted above, it does not need to be done every time you log in. I am not sure how it performs in the event that you disconnect/reconnect the monitor or dock/undock the laptop frequently.