Can open source web applications increase the ROI of your website?

Open source web applications are widely tested and regularly updated by a global community of developers, allowing them to respond quickly to changes in web trends and technologies – and all for FREE!

For those not familiar with the term open source, it describes practices in production and development that provide open access to the end product’s source materials. (Video: Stephen Fry introducing open source software)

In choosing the right technologies for your website you may have already come across some of the leading open source content management systems (CMS) such as Drupal, WordPress and Joomla. The continuing growth and success of these products owes much to the open nature of the projects, where any developer can contribute to fixing problems and building enhancements. So as well as the software being free to use, regular updates are made available by the global community of developers to improve security and usability, keeping these products responsive to new threats and trends.

Despite these advantages of open source products, can we really trust them with our online sales? Magento is proving we can.

Magento is an enterprise-level e-commerce and CMS application which has been designed with big businesses in mind – the likes of Nokia, Xerox, Adidas and Samsung have all adopted Magento at a commercial level. However, despite this commercial arm, Magento has actually been built on open source technologies and is available for free, making a very powerful system affordable to SMEs.

Unlike older e-commerce platforms, Magento has been built from the ground up on modern technologies, meaning it not only offers improved customer and administrative usability but does so with a stronger focus on SEO. The result is software that proactively organises products and content in a way search engines can lock into, helping to drive new visitors to your shop organically.

We’ve had fun developing a few sites in the latter part of 2010 on Magento and are very excited by the results. Why not check them out:

  1. http://dinkyinc.co.uk
  2. http://reefjewellery.co.uk
  3. http://loose-fit.co.uk

FLUID7 Relaunch!!

We had a great time celebrating the relaunch of FLUID7 after bringing together Web Jetty and FLUID7 as one new stronger, better company!

We invited all our clients along to the stunning 1450 bar in Coventry, where we hired the top floor for our relaunch party. Serving canapes and drinks all evening, we wanted a chance to mingle with our clients,  introduce them to the new team, thank them for their support and excite them about the opportunities ahead.

We didn’t want the evening to drag, so we came up with the idea of extending a recent team photo shoot brief to our clients. When quickly updating our website with new team photos we had decided to include some random props in the pics (check them out), so what better way to break the ice and get everyone involved in the evening than to have people create their own masterpieces from the props available? I have to say I was impressed at the effort many went to, and as you can see we had no problems getting stuck in ourselves!

Jon Adjei, James Herring, Faith Martin, Matt Martin
The New FLUID7 Team!

We didn’t want everyone’s efforts to go unrewarded, so, feeling generous, we’ve created some categories, chosen our favourites and are now excited to announce the winners (who’ll each receive free hosting for a year).

BEST POSE: Lee Rogerson (Street Talk)

Lee Rogerson: Winner of Best Pose

MOST HUMOROUS: Tim Coleman (CWMC)

Tim Coleman: Winner of Most Humorous

MOST THEATRICAL: Patrick McNeill

Patrick McNeill: Winner of Most Theatrical

You can see the entire photo collection on our Facebook page. Special thanks to Alex Rideway for taking the photos for us and to Mike Bensley and his team at 1450 for looking after us throughout the evening.

We’ve certainly marked the joining of the two companies with a memorable event and are excited to see what 2011 holds for the new team – watch this space…

Ninja tools … or debugging network problems

Wireshark network analysis tool

Having just come through a harrowing ‘network issue’ ordeal, I thought I’d best document the steps back to sanity from out of my naivety. A rough description of the scenario follows.

We have a client using Sage Line 50 who wanted to perform queries on their web members database in conjunction with their in-house accounting information. This set me down the path of setting up an on-demand, database-to-database connection and sync. The solution was a VPN connection between the client’s LAN and their web server.

Having never played seriously with VPNs before, we secured the help of a good colleague of ours with mad IT skillz.. Tino at Forza-IT (site coming soon).
A day on site for Tino and a few days of fiddling on my part got the solution in place. This ran fine for a couple of months, but then a rare kernel upgrade forced a reboot of the web server a week ago.

The first we heard of any issues was a few days later, when none of the client’s machines could access the web members database. Tonight’s conclusion has seen me grow from a ‘poke it and see’ position to an ‘ahhh, I see how that’s working’ position. Basically I’d left a failed/partial L2TP/IPsec VPN setup in place as well as the working PPTP VPN, but it took a lot of investigating to finally see the light!

The ninja tools I have acquired along the way are as follows.

Network routing and traffic investigation

netstat -rn
...
netstat -an | grep LISTEN
...
route -n
...
telnet example.com 1723
...
tshark -i eth0 proto 47
...
tshark -i eth0 port 500
...
tail -f /var/log/messages

tshark is a traffic-watching tool that seems to have taken over the mantle of ethereal, tcpdump and nc (netcat), and installs on CentOS with the wireshark package. It is essentially tethereal renamed.

yum install wireshark
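
If you want to watch both halves of a PPTP connection at once, the capture filters above can be combined. A quick sketch, assuming the VPN traffic is on eth0 as before:

tshark -i eth0 'proto 47 or port 1723'

proto 47 is the GRE tunnel carrying the actual PPP frames, while port 1723 is the PPTP control channel.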

Remote syslog harvesting

vi /etc/sysconfig/syslog

Add the -r switch to the options therein

...
#SYSLOGD_OPTIONS="-m 0"
SYSLOGD_OPTIONS="-r -m 0"
...

Ensure ‘syslog 514/udp’ is listed somewhere in /etc/services
… and finally restart syslogd

service syslog restart

Once you’ve set your router to forward event logs to the syslog server IP, /var/log/messages will harvest the router logs as well as the local events. Have a look at syslog-ng if you want to get more clever with syslog.
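
Once it’s flowing, the router chatter gets mixed in with everything else in /var/log/messages, so it helps to pull it back out. A quick sketch, where 192.168.0.1 is just a stand-in for whichever IP your router logs from:

tail -f /var/log/messages | grep 192.168.0.1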

The combination of these tools and several hours of reading got me through in the end. The failed VPN connection had set up a network route on reboot that sent any outgoing traffic from the web server to the client router into a black hole. This made investigation really difficult: pinging or telnet-ing the web server from the LAN would send packets that could be seen arriving on the server, but no reply ever came back. I faffed with the firewall an awful lot, turning traffic logging on and off while tracking the cause down. In the end, the thing that made the penny drop was tshark showing traffic on port 500 (IKE) coming from the LAN when, as far as I was aware, I wasn’t trying to initiate a VPN connection from the web server. This was the IPsec connection that was set to start ONBOOT: it had sprung to life, failed to create a VPN, and killed traffic between the two sites for good measure.
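
If you end up in the same hole, the stale route shows up in the routing table and can be cleared by hand while you tidy up the broken VPN config. A rough sketch, where 192.168.1.0/24 is a made-up placeholder for whatever network the failed connection was pointing at:

# spot the route the dead VPN has left pointing at the client LAN
route -n
# (placeholder subnet) remove it
route del -net 192.168.1.0 netmask 255.255.255.0
# then set ONBOOT=no in the failed connection's ifcfg-* file so it
# doesn't spring back to life at the next reboot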

Well glad that’s all over, and I’m sure the client will be on Monday … bleurgh!

Cross browser HTML5 Audio and Video is a reality!

Well OK.. kind of … 🙂

I’ve been searching high and low for HTML5 implementations of video and audio. I wanted all my audio and video widgets to look the same cross-browser. Although I knew HTML5 is getting great support, I also knew that HTML5 video and audio weren’t supported by IE6, 7 and 8 and were only partially supported by Opera and Firefox.. So the path to achieving my goal looked pretty grim..

But alas! There are many clever implementations out there that are usable.. one in particular impressed me..

Drupal – Theming to keep your modules modular

Drupal is a powerful CMS and allows us as developers to create very bespoke websites and applications.

I tend to create a module for every website to handle its Page and Block declarations. But it’s messy, not to mention unconventional, to include HTML in your modules. I want to share how to theme your page declarations, and any other piece of HTML for that matter, to keep your modules tidy.

Magento – Add custom content layouts for CMS pages

Magento

Sometimes the design requires the main content block of your theme to have a complicated CMS page content layout that strays from basic linear content (adding blocks below whatever is under the main content block). You may want various blocks to be placed anywhere you like, e.g. on the homepage..

This post shows a good clean way to use layout XML and PHP in your layout files to position blocks of content exactly where you want inside your main content block.

Drupal CSS aggregator

A couple of pointers when you’re getting into theming Drupal the correct way rather than just hacking around as is most fun.

I seem to hit trouble getting the aggregator feature of Drupal working, and often end up just slapping an external CSS link call into the page template.

The proper way to do it is a little long-winded, but gives us the speed optimisations offered by the aggregator facility. Instead of putting <link … /> in the page.tpl.php file, use the drupal_add_css() function in your template.php file.

The best place to put it is in a function called <themename>_preprocess_page().

And here’s an example of what that function can contain…

function mytheme_preprocess_page(&$vars) {
  //JA Inject theme styles and js
  // Paths are relative to the Drupal root, with no leading slash (see note below)
  $resetcss = drupal_get_path('theme', 'mytheme') . '/yui/build/reset-fonts-grids/reset-fonts-grids.css';
  $thickboxcss = 'misc/thickbox/thickbox.css';
  $thickboxjs = 'misc/thickbox/thickbox-compressed.js';

  drupal_add_css($resetcss, 'module', 'all', 1);
  drupal_add_css($thickboxcss, 'theme', 'all', 1);
  drupal_add_js($thickboxjs, 'theme', 'header');

  // Rebuild the rendered style and script variables so page.tpl.php
  // picks up the files added above
  $css = drupal_add_css();
  $vars['styles'] = drupal_get_css($css);
  $vars['scripts'] = drupal_get_js();
}

Some other things to watch out for .. make sure the path you provide the aggregator is relative to the Drupal root, with no leading slash … I’m not helping much am I!
I mean this …
misc/thickbox/thickbox.css
as opposed to this …
/misc/thickbox/thickbox.css

Also make sure the web server has access to the files .. correct permissions etc.
I found that even pointing the aggregator at symlinks instead of the actual files was causing a problem .. probably to do with permissions on the real files.

Anyways .. hope that helps!

References:
http://api.drupal.org/api/function/drupal_get_css/6
http://api.drupal.org/api/function/drupal_add_js/6

NVIDIA and suspend issues

I’ve got the nvidia proprietary video drivers running on my Fedora laptop using the rpmfusion-nonfree yum repo.

I also installed the akmod-nvidia package as it recompiles the kernel module for the graphics each time a new kernel is installed. Super!

However, I’ve been battling with suspend failing when slamming the lid on my laptop.. it hangs and won’t power off/restart without a nasty 10-second press and hold of the power button.
I think I’ve finally figured out the problem. It seems the kmod-nvidia-* package tries to install as well.

A bit of the following and all seems well in sleep world!

yum remove 'kmod-nvidia-*'

In /etc/yum.repos.d/rpmfusion-nonfree-updates.repo add this line inside the [rpmfusion-nonfree-updates] block

exclude=kmod-nvidia-*
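
To keep an eye on which nvidia packages are actually installed (and make sure the repo’s kmod packages aren’t creeping back in with each update), a quick check:

rpm -qa | grep -i nvidia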

Plesk and Qmail into Virtualmin and Postfix

Biggest headache!

I’ve spent hours trying to get the Maildir storage of a Postfix install working on a Virtualmin box that had been migrated from Plesk.

There are a few critical steps to get everything working. There are lots of references out there, but none covered all my issues at once. All these bits might need setting or just verifying to make it all happy, and this is done on a CentOS 5 box.
Ultra critical points for me were step 1 (6th line), step 6, step 7, step 9 and the last few command line instructions (especially the 1st line). A file-level sketch of steps 6 and 7 follows just after the list.

  1. Webmin -> Webmin -> Usermin Configuration -> Usermin Module Configuration -> Read mail
    Mail storage format for Inbox = Remote IMAP server
    Sendmail mail file location = /var/spool/mail
    Qmail or MH directory location = Subdirectory under home directory
    Qmail or MH directory in home directory = Maildir
    POP3 or IMAP server name = localhost (this might need to be typed explicitly)
    Sendmail command = /usr/lib/sendmail
    Default hostname for From: address = From real hostname
    Allow editing of From: address = yes
    From: address mapping file = /etc/postfix/virtual
    Address mapping file format = Address to username(virtusertable)
  2. Webmin -> Webmin Configuration -> Webmin Modules
    Install Postfix
    Remove Sendmail
  3. Webmin -> Servers -> Postfix Mail Server -> General Options
    What domain to use in outbound mail = Use hostname
    What domains to receive mail for = $myhostname, localhost.$mydomain, localhost, localhost.localdomain
    Send outgoing mail via host = Deliver directly
    Default database type = hash
    Internet hostname of this mail system = Default (provided by system)
    Local internet domain name = Default (provided by system)
    Local networks = Default (all attached networks)
  4. Webmin -> Servers -> Postfix Mail Server -> Mail Aliases
    Alias databases used by the local delivery agent = hash:/etc/aliases
    Alias databases built by “newaliases” command = hash:/etc/aliases
  5. Webmin -> Servers -> Postfix Mail Server -> Virtual Domains
    Domain mapping lookup tables = hash:/etc/postfix/virtual
    Domains to perform virtual mapping for = From domain mapping tables
  6. Webmin -> Servers -> Postfix Mail Server -> Local Delivery
    Home-relative pathname of user mailbox file = Maildir/
  7. Webmin -> Servers -> Procmail Mail Filter
    Set variable DEFAULT to $HOME/Maildir/
    Set variable ORGMAIL to $HOME/Maildir/
  8. Webmin -> Networking -> Networking Configuration -> Hostname and DNS Client
    Hostname = localhost.localdomain
  9. Webmin -> Networking -> Networking Configuration -> Host Addresses
    127.0.0.1 = localhost, localhost.localdomain
    <your external IP address> = <FQDN> (eg. 80.70.60.50 = example.com)
    Then click ‘Apply Configuration’
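
For reference, steps 6 and 7 boil down to a couple of lines on disk. A rough sketch, assuming Webmin is writing to the usual /etc/postfix/main.cf and /etc/procmailrc locations on a CentOS 5 box:

# /etc/postfix/main.cf (step 6): deliver into ~/Maildir/ rather than an mbox file
home_mailbox = Maildir/

# /etc/procmailrc (step 7): make procmail deliver to the same place
DEFAULT=$HOME/Maildir/
ORGMAIL=$HOME/Maildir/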

The last few steps are best done from the root command line…

hostname <FQDN>  (eg. hostname mail.example.com)
mkdir -p /etc/skel/Maildir/new
mkdir -p /etc/skel/Maildir/cur
mkdir -p /etc/skel/Maildir/tmp
wget -c http://www.qmail.org/convert-and-create
chmod +x convert-and-create
./convert-and-create
postmap /etc/postfix/virtual
newaliases
service postfix restart
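
With Postfix restarted it’s worth pushing a test message through and watching where it lands. A quick sketch, assuming the mailx package is installed and with user / example.com standing in for a real mailbox on the box:

echo "test body" | mail -s "maildir test" user@example.com
ls ~user/Maildir/new/
tail /var/log/maillog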

Few extra pointers ..
you need

host `hostname`

to give you the <hostname.FQDN> and external IP address (take note of the backticks, not apostrophes)…

mail.example.com has address 80.70.60.50

and looking inside /etc/hosts, you should see the 2nd line as your external IP address and just the <FQDN>…

127.0.0.1   localhost   localhost.localdomain
80.70.60.50   example.com

You’re looking for the results of

postconf -n

to look something like this

alias_database = hash:/etc/aliases
alias_maps = hash:/etc/aliases
broken_sasl_auth_clients = yes
command_directory = /usr/sbin
config_directory = /etc/postfix
daemon_directory = /usr/libexec/postfix
debug_peer_level = 2
home_mailbox = Maildir/
html_directory = no
inet_interfaces = all
mail_owner = postfix
mailbox_command = /usr/bin/procmail-wrapper -o -a $DOMAIN -d $LOGNAME
mailq_path = /usr/bin/mailq.postfix
manpage_directory = /usr/share/man
mydestination = $myhostname, localhost.$mydomain, localhost, localhost.localdomain
newaliases_path = /usr/bin/newaliases.postfix
queue_directory = /var/spool/postfix
readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
sample_directory = /usr/share/doc/postfix-2.3.3/samples
sender_bcc_maps = hash:/etc/postfix/bcc
sendmail_path = /usr/sbin/sendmail.postfix
setgid_group = postdrop
smtpd_recipient_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
unknown_local_recipient_reject_code = 550
virtual_alias_maps = hash:/etc/postfix/virtual

Anyway, hope that helps .. you can wake up and leave now if you like.

References:
http://bliki.rimuhosting.com/space/knowledgebase/linux/mail/Postfix+mbox+to+Maildir+conversion

http://bliki.rimuhosting.com/space/knowledgebase/linux/mail/postfix+notes

http://www.postfix.org/DEBUG_README.html

http://www.seaglass.com/postfix/faq.html

http://www.virtualmin.com/node/11123

Is the PCI scan on your webmin revealing weak SSL ciphers?

webmin

Mine was, but the fix was pretty straightforward.

  1. In Webmin go to Webmin -> Webmin Configuration -> SSL Encryption
  2. Enter the following into the Allowed SSL Ciphers field
    ALL:!ADH:!LOW:!SSLv2:!EXP:+HIGH:+MEDIUM

    I grabbed this string from the hardened Apache SSL config provided by the excellent Atomic Secured Linux.
  3. Restart webmin and you should be good to go.
  4. You can test that it worked by following the instructions in the blog post referenced below, or with the quick openssl check sketched just below.
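
As a quicker check you can also ask openssl to offer only weak ciphers to Webmin and make sure the handshake is refused. A sketch, with your.server.example standing in for your own hostname and assuming Webmin is still on its default port 10000:

openssl s_client -connect your.server.example:10000 -cipher LOW:EXP
# a handshake failure here is the result you want; getting a certificate
# back means weak ciphers are still being accepted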

References:
Disable SSLv2 in Webmin | Noodles’ Blog.

Addendum:

After a bit more use/testing of these changes, it turns out this interfered with Eclipse/Trac/Mylyn when connecting to this server/repo.

I’ve just figured out that to get this 100% happy I needed to force the SSL version to 3 rather than 2… and of course the PCI compliance tests still pass.

SSL weak cipher fixes