
The Latest from Boomcycle’s Development Team

Catching Up With Some of Boomcycle’s Best

In the past year, our team has been busy with a number of projects, both for Boomcycle and independently.

Project Manager and Lead Software Developer Jason has been working primarily on two websites:

  • Erepairables.com: Boomcycle was brought in to assist this company when their site began to hit growth problems in its asset import system. The system ingests a data feed of damaged vehicles each night and must import numerous images for each vehicle. The original script was taking more than 24 hours to import the nightly batch of 10,000-20,000 images, so it would still be running when it was supposed to start again the next day. To remedy this, Jason wrote a custom, multi-threaded image-fetching system that downloads many images in parallel (a simplified sketch of the idea appears after this list). He further enhanced the system with a cloud-based auto-scaler that automatically spins up multiple virtual servers in response to heavy workloads and shuts them down when the work is complete. This dynamic server allocation saves roughly 70-80% compared with keeping ten dedicated servers running, and it gives our client the headroom to grow their catalog and, with it, their business. Jason is currently putting the finishing touches on an extensive site upgrade that improves the user experience and simplifies the back-end code, which has grown quite complex along with the system.
  • MyPlan.com: We migrated the system (originally built in 2003) to the cloud around ten years ago and are now in the midst of a redesign that will bring the legacy code forward to PHP 7, with a modern framework and DBMS on the back end, and serve responsive page layouts suitable for modern devices. The site is data-driven, comprises nearly a hundred thousand distinct URLs, and serves 2.6 million registered users.
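
To give a flavor of the parallel-fetch idea mentioned above (this is an illustrative sketch, not Boomcycle's production code, and the names and concurrency limit are invented), a bounded pool of download workers in Node.js might look like this:

// Illustrative only: download a list of images with bounded concurrency.
const https = require('https');
const fs = require('fs');

function fetchImage(url, dest) {
  return new Promise((resolve, reject) => {
    const file = fs.createWriteStream(dest);
    https.get(url, (res) => {
      res.pipe(file);
      file.on('finish', () => file.close(resolve));
    }).on('error', reject);
  });
}

// Run at most `limit` downloads at once instead of one at a time.
async function fetchAll(jobs, limit = 20) {
  const queue = jobs.slice();
  const workers = Array.from({ length: limit }, async () => {
    while (queue.length) {
      const { url, dest } = queue.shift();
      await fetchImage(url, dest).catch((err) => console.error(url, err.message));
    }
  });
  await Promise.all(workers);
}

The same principle applies regardless of language: the nightly job becomes bound by network bandwidth rather than by waiting on one image at a time.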

Jason is a certified tinfoil-hat wearer when it comes to security, and he enjoys novel programming problems and unusual challenges that call for next-level skills like encryption, multiprocessing, and distributed processing.

Software Architect Will has been working on several projects, including:

  • A centralized single sign-on system with account and entitlement provisioning that is now used by roughly a third of the cabinets in the state of Kentucky. This system allows consumers (other systems owned by various state cabinets) to define their authorization requirements (workflows for requests, approvals, denials, and revocations). It is also now mandated as the central authentication gateway for any new development done in the state of Kentucky.
  • A system for exporting, transferring, and importing configuration data, rules, and more between instances of a massive platform (over 9,000 unique pages). The platform, essentially a competitor to SAP, is used in many states and countries. The tool moves multiple gigabytes of data across the cloud to provision customer environments wherever they are, and it must support any customization our clients make to the system; since this is an ERP platform, there are many intricate customizations.
  • Currently, Will is architecting a new security subsystem that will integrate with most third-party authentication and identity systems and provide additional security features such as multi-factor authentication. It all must be extensible, since our customers (state and national government entities) define many of their own providers for such things. And it is being fitted into a system that is well over 15 years old, still runs a lot of legacy technology, and did not start life with any real architectural guidance. It's no small feat.

Will was the architect of Kentucky's Health Benefits Exchange. His exchange was one of the very few to be delivered on time and to work from day one, and it went on to become the model for many other exchanges.

Our front-end guru Rich has been busy as well.

At Disney Interactive, Rich was part of a media platform team that built a Ruby on Rails media asset engine and content management system. Today it provides all assets (images, videos, audio, and game metadata) for every Disney media website and for mobile games around the world (over 10,000 Unity, Flash, and HTML5 games). Rich helped integrate the asset engine for the launch of the Star Wars portal (StarWars.com) and LucasFilm.com, and the engine currently serves all media for Disney.com and DisneyJunior.com.

Before that, Rich worked as the lead front-end engineer on Disney's recipes-and-crafts website, Spoonful.com (now called simply "Disney Family" at http://family.disney.com), a site that receives over 3 million page views per day. Rich was brought on to fix some difficult performance and optimization issues and to help organize an enormously complex JavaScript codebase. It was his responsibility to raise code quality by bringing attention to best practices and web standards in code reviews, and he also acted as a liaison between the larger back-end team and the design team. Rich vetted wireframes and other UX documents and was ultimately responsible for ensuring fully responsive design across a wide spectrum of browsers on desktops, tablets, and smartphones. Spoonful was also the first website to use the highly anticipated Pinterest API: Rich built a shared JavaScript widget, deployed across several Disney websites, that showed popular Pinterest pins of Disney recipes and crafts.

When Rich worked at Linden Lab in San Francisco, he built a new registration flow for the famous virtual world Second Life; the flow is still in use today at https://join.secondlife.com. As lead front-end engineer, he helped build a slick (for the time) interface that decreased user friction and simplified a complicated signup process. He worked with a team of translators, and the flow has been internationalized into seven languages. Over 10,000 people sign up with it every day, and about 20 million users have registered through it over the years. After that, Rich worked on HTML5 multiplayer real-time game prototypes and helped create Linden Lab's first web-based games outside of Second Life.

Rich understands how to architect scalable, efficient front ends and deploy them across different systems. He also has a deep understanding of automated front-end testing, front-end A/B testing, continuous integration, source control for large projects (especially Git), and the optimization and caching of media assets for large-scale websites. Naturally, Rich has an excellent grasp of responsive design, device and browser compatibility, accessibility, third-party advertising, and sensible SEO practices as they pertain to front-end code.


Building iOS and Android apps with React, ES6 and PhoneGap

Front End Software Engineering in the Service of America’s Pastime

These are exciting, if volatile, times for front-end web engineering. New libraries and frameworks appear every day, and most have a pretty short shelf life; tens of thousands of abandoned GitHub projects exist only to impress potential employers. But a few really stand out as both novel and practical. One of our top front-end engineers, Rich Goldman, describes a project he built using React, an open-source front-end framework from our friends at Facebook that stands out as the best of its kind, at least for the foreseeable future.

In this first of two articles, I’ll give an overview of how a friend and I took an idea for a simple, monetizable smartphone app from inception to publication. I’ll explain why we chose the combination of technologies we did from the vast array of open source options that were available to us, and I’ll be sure to point out anything we found interesting along the way.

The idea itself is called “Pitch or Perch.” It’s a fantasy baseball app that assigns numeric scores to starting pitchers, which quickly lets users determine whether they should start a given pitcher or bench him. This is normally the most difficult aspect of managing a team, because there are literally hundreds of starting pitchers in Major League Baseball. Beyond their natural talent, pitchers can display a wide variance in quality from start to start. They’re prone to hot and cold streaks; they may be facing a gargantuan offense, such as the historic 2016 Red Sox; they may be up against an ace on the opposing team, which would greatly decrease their chances of getting a win; they may be pitching in a ballpark that has thin air, like at mile-high Coors Field in Colorado, where the average ERA is a full point higher than in any other stadium. These are just some of the factors that play heavily into whether a pitcher is going to hurt or help your fantasy team on a given day. It’s almost impossible to keep track of all this, so an app that uses a good data feed and a set of relatively simple algorithms can quickly help users set their pitching rotation with confidence and move on with their day.

Before you can decide on front-end technologies, you need a good handle on what you're trying to accomplish. In our case, we knew we wanted to build a fairly simple app that we could sell through the iTunes App Store and Google Play. A common misconception is that you need a separate iOS developer and Android developer who build out the apps using different languages and codebases. While that may be true for real-time multiplayer games, it's often possible to write the code once in HTML5 and JavaScript and then port it to iOS and Android (and other platforms) using Adobe PhoneGap. There may be a trade-off in how smooth the overall experience is, but you can save a huge amount of time and money for a minimal loss of polish, assuming the interaction between the HTML5 code and the platform APIs isn't overly extensive or twitchy.

In our case, the user was primarily reading data (just vital pitcher stats and the numeric score), and so the app seemed like an excellent candidate for PhoneGap. We decided the app should just update itself overnight with the next day’s projections. In fact, we didn’t even need the user to log in, because we weren’t collecting any information from them. At some point in the future, we may allow the user to store their pitchers in our database, but with only about 30 or 35 pitchers starting on a given day, it’s not a big concern for launch date: Opening Day 2017.

One feature we believed critical was being able to see at least three days out. A common issue in leagues with daily lineups is that the user might “go dark” for a period of time — usually it’s kayaking with the family or something equally prosaic. They need to be able to set their lineups at least a few days in advance.

And this is where it helped to work with a good data feed provider. After some research, we realized there really weren’t many options for reliable MLB data. I was surprised to learn that MLB doesn’t even offer a developer API for player statistics. They collect statistics comprehensively — and you can view them as HTML on their website — but there’s no free or paid service that allows you to access that data via REST calls. We looked at a few javascript libraries for scraping HTML, but it seemed like a terribly tedious and brittle approach. Next, we looked at some sports feeds by Yahoo and fantasy startups such as Rotowire, but they didn’t have the breadth of the data we needed — which was a considerable amount. We needed every pitcher’s stats for the season up to that point. Our algorithms required every pitcher’s complete projection data from multiple sources such as Yahoo, DraftKings, and Fanduel; stadium data; and team hitting statistics. Luckily, there was one company that could provide us with an API that robust — a startup called, you guessed it, FantasyData (https://fantasydata.com). After expressing our need to see projection numbers multiple days out, they added that data immediately. Try that with a company like Yahoo!

The simplicity of the interface suggested that React could work well for us. Obviously, React isn't the best solution for every situation; complex enterprise products might benefit more from full-featured frameworks such as Angular or Ember. React is really just the "View" in the traditional Model-View-Controller pattern: it's meant to be lightweight and agnostic about the data behind it. React also plays very well with the latest JavaScript specification, ECMAScript 6 (ES6), and with plain "vanilla" JavaScript, meaning you no longer need libraries like jQuery for most tasks. The two go hand in hand, which makes React a good long-term choice for dev teams that want to stay ahead of the curve. Unlike a complex framework such as Angular, which has an extremely steep initial learning curve and often locks you into older libraries, any JavaScript engineer can learn basic React within a week or two and quickly build practical applications using the latest ES6 techniques and best practices.
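
To show how approachable that is, here is a minimal React component written as an ES6 class. This is illustrative only; it assumes React and ReactDOM are installed via NPM and that the page has an element with the id "app":

// A minimal React component written as an ES6 class.
import React from 'react';
import ReactDOM from 'react-dom';

class Hello extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}!</h1>;
  }
}

// Render the component into <div id="app"></div>.
ReactDOM.render(<Hello name="fantasy baseball" />, document.getElementById('app'));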

One thing that confuses a lot of people is how many separate packages of JavaScript code must work in concert to function as a complete application in a given environment. The packages you use for a project have their own versioned dependencies, and these are all managed by the Node Package Manager (NPM). The functionality of these tools often overlaps to some degree, so you need to experiment and research quite a bit to find the best combination for a given situation. A lot of this wisdom is anecdotal, picked up along the way through blog posts and confessional StackOverflow comments.

A couple of important tools we used are Webpack and Babel. Without getting into too much dry detail, these libraries let you transform and transpile your assets, somewhat like compiling a desktop app in Java or C#. React views written in JSX need to be transformed into plain JavaScript, and because Internet Explorer doesn't fully support ES6 (of course), you also need to transpile your code down to ES5 if you want the app to work in IE. Webpack and Babel handle both of these processing steps.
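
For reference, a minimal Webpack configuration that runs Babel over our JSX might look something like the sketch below. It uses the Webpack 1-era syntax we were working with (newer versions use module.rules), and the paths and presets are assumptions rather than our exact setup:

// webpack.config.js -- minimal sketch; entry and output paths are hypothetical.
module.exports = {
  entry: './src/app.jsx',
  output: {
    path: __dirname + '/build',
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /\.jsx?$/,                         // handle .js and .jsx files
        exclude: /node_modules/,
        loader: 'babel-loader',
        query: { presets: ['es2015', 'react'] }  // ES6 -> ES5, JSX -> plain JS
      }
    ]
  }
};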

Luckily, we can automate these steps with another very popular tool called Gulp. Gulp is a JavaScript build tool, not unlike Ant for Java or Make on Unix, but since Gulp tasks are written in JavaScript rather than XML, it's quite flexible, powerful, and easy to debug. There are literally thousands of Gulp plugins you can import into your build chain to handle a wide variety of tasks, from deploying to a CDN to customizing your development environment. And since Gulp works with data streams in memory, intensive build processes run quickly, assuming you have the RAM.

In our case, we just wanted some basic post-processing for the app: minify and obfuscate our JavaScript, minify our CSS, and watch certain directories and files for changes so that Webpack and Babel would automatically re-process and re-deploy. Gulp did all of this for us, easily. In the future, we may use it to automagically generate image sprites and the relevant CSS, optimize pitcher photos for different devices, or even create sourcemaps for our compressed assets, which would let us debug obfuscated code and CSS with ease. Gulp is our friend.
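
A gulpfile along the lines of the sketch below captures the idea. The plugin choices (gulp-uglify, gulp-clean-css, webpack-stream) and file paths are assumptions for illustration, not our exact configuration:

// gulpfile.js -- illustrative sketch using Gulp 3-style tasks.
const gulp = require('gulp');
const uglify = require('gulp-uglify');            // minify/obfuscate JavaScript
const cleanCSS = require('gulp-clean-css');       // minify CSS
const webpackStream = require('webpack-stream');  // run Webpack (and Babel) from Gulp

gulp.task('scripts', () =>
  gulp.src('src/app.jsx')
    .pipe(webpackStream(require('./webpack.config.js')))
    .pipe(uglify())
    .pipe(gulp.dest('build'))
);

gulp.task('styles', () =>
  gulp.src('src/**/*.css')
    .pipe(cleanCSS())
    .pipe(gulp.dest('build'))
);

// Watch for changes and re-run the post-processing automatically.
gulp.task('watch', () => {
  gulp.watch('src/**/*.jsx', ['scripts']);
  gulp.watch('src/**/*.css', ['styles']);
});

gulp.task('default', ['scripts', 'styles', 'watch']);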

With our basic tools configured and in place, we were finally ready to work on the app.

First, we figured out the algorithms we would use to calculate our pitcher scores. This is part of our secret sauce, but we can let on that it involves an amalgamation of projection data from different sources. We tuned the algorithm to output a score between 60 and 140 for each pitcher. In line with sabermetric statistics, 100 represents an average start, and 140 represents a start that is 40% better than average. Obviously, this is difficult to quantify, especially since leagues use different scoring formats: what counts as a great start in one fantasy league might be only mediocre in another. But these approximations are close enough for launch, and we'll eventually let users enter their league's scoring format to get more accurate predictions. We'll keep a close eye on our calculations and will likely tweak things over time to improve the predictive model.
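
The real formula is proprietary, but a purely illustrative sketch of mapping a blended projection onto that 60-140 scale might look like this (the inputs and weighting here are invented for the example):

// Purely illustrative -- not the actual Pitch or Perch algorithm.
// 100 is an average start; 140 is a start projected to be 40% better than average.
function pitcherScore(projectedPoints, leagueAveragePoints) {
  const raw = 100 * (projectedPoints / leagueAveragePoints);
  return Math.round(Math.min(140, Math.max(60, raw)));  // clamp to 60-140
}

console.log(pitcherScore(18, 15));  // a pitcher projected 20% above average scores 120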

Now on to the React app itself. React relies heavily on the idea of "components." A React component consists of the markup associated with a piece of the layout, plus all the JavaScript required to render that piece dynamically. For example, the list of starting pitchers is a React component, so we created a JSX file (JSX is a syntax extension to JavaScript) called PitcherList.jsx. A JSX file is just a React view before it has been converted into plain JavaScript and markup by Webpack and Babel. Think of JSX as similar to a server-side scripting language such as PHP or ASP, except that the language is ES6 and the script executes on the client, not the server. Combining markup and JavaScript in the same file flies in the face of conventional wisdom, which for years has called for a clear separation between content, presentation, and logic. But React lets us mingle markup with script so fluidly that it makes sense, as long as the view isn't overly complex.
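
A stripped-down sketch of what a component like PitcherList.jsx might look like is shown below; the prop names and fields are assumptions, not our exact code:

// PitcherList.jsx -- illustrative sketch of a list component.
import React from 'react';

class PitcherList extends React.Component {
  render() {
    return (
      <ul className="pitcher-list">
        {this.props.pitchers.map(pitcher => (
          <li key={pitcher.id} onClick={() => this.props.onSelect(pitcher)}>
            {pitcher.name} <span className="score">{pitcher.score}</span>
          </li>
        ))}
      </ul>
    );
  }
}

export default PitcherList;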

Now that we had the list of pitchers, the other major view we needed to build was the pitcher’s vital stats that would appear when you clicked on his name in the list. We called this file PitcherDetails.jsx and added the appropriate HTML5 and ES6 to build out that view. Since Gulp was configured to watch our jsx directory, it automatically picked up on the new file and did the proper post-processing and re-deployment via Webpack and Babel.

We needed one more component, called PitchOrPerch.jsx, which was simply the container for both PitcherList.jsx and PitcherDetails.jsx. In React, a container for other components is a component itself. With three fairly simple React components, our content was complete for launch. Next came the styling. We used the new CSS flexbox model — finally supported across modern browsers — to allow for responsiveness across devices, which was critical for our app. Smartphones have a huge range in resolution and pixel density, and flexbox allowed us to support all these sizes with a minimal amount of effort — and without having to include heavyweight dependencies such as Twitter’s bloated Bootstrap library.
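
A rough sketch of how such a container might compose the other two components follows; again, the state shape and prop names are illustrative assumptions:

// PitchOrPerch.jsx -- illustrative container component.
import React from 'react';
import PitcherList from './PitcherList.jsx';
import PitcherDetails from './PitcherDetails.jsx';

class PitchOrPerch extends React.Component {
  constructor(props) {
    super(props);
    this.state = { selected: null };  // the pitcher the user tapped, if any
  }

  render() {
    return (
      <div className="pitch-or-perch">
        <PitcherList
          pitchers={this.props.pitchers}
          onSelect={pitcher => this.setState({ selected: pitcher })}
        />
        {this.state.selected && <PitcherDetails pitcher={this.state.selected} />}
      </div>
    );
  }
}

export default PitchOrPerch;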

Finally, a few too many hours were spent scanning through gigantic font foundries searching for baseball-inspired fonts, and writing arcane CSS to get the fonts to display on every browser and device.

At this point, I hope you have a better understanding of what it takes to build a simple React application from beginning to end. As I noted before, a real-world app requires not only React but a host of other tools and dependencies, as managed by Node Package Manager.

In my next post, I’ll detail the thrilling travails of converting and optimizing our app for our target platforms, iOS and Android, using Adobe PhoneGap. We’ll look at the process of testing and debugging with tools such as BrowserStack, and at getting our apps approved and published in their respective online stores. And finally, we’ll talk a little about our marketing approach and we’ll see how we actually do in the first month of launch, April 2017.

Batter up!



Why A Business Plan Is Critical for Growing Technology Companies

The Competitive Landscape

It is a myth that entrepreneurs, and businesses in general, no longer need a business plan to raise money. There is a line of talk from some VCs and angel investors, particularly in Silicon Valley, that signals to businesses (and wannabe businesses) that all they need is a term sheet and a slide deck, and the world will be their oyster. The reality is a bit different. Of course, there are technology companies receiving funding today with no business plan. But if they do "hit the lottery," a very rare occurrence, they may find that what they have left of their company is a small pittance of ownership. How will a business plan help in retaining ownership?

Most people starting companies, with or without good business plans, will fail within two years. Here are some data points for a reality check:

2 Year Failure Rates of Types of Business Starts

  • 85% of venture backed businesses fail in two years
  • 65% of garage founded businesses fail in two years
  • 35% of franchise business owners (franchisees) fail in two years

To put a somewhat finer point on it: 85% of the best and brightest, with lots of available cash, fail within two years. 65% of two-person companies with an idea fail within two years. Yet the 65% of franchise businesses that survive, armed with only adequate financing, a somewhat generic business plan, good advice and mentoring, and a little bit of hard work, beat them all.

Businesses fail for a wide variety of reasons, but these reasons can be grouped into some handy buckets:

  • Bad structures
  • Bad advice
  • Bad ideas
  • Bad execution

If you want to read a few excellent studies about why businesses fail, one good source is the Startup Genome Reports, available here: http://blog.startupcompass.co/.

There are a lot of things that can go wrong in a new or established business, and surely some things will go wrong. If you have a business plan, however, these somewhat predictable "wrong turns" may not be catastrophic. Underlying all of these issues, whether or not you have a business plan, is the need to identify what your business model is, what assumptions you have made in establishing it, and how you will know whether those assumptions hold.

Surviving long enough to pivot, that is, to make the changes required to tune your business model, requires that none of the bad assumptions among your thousands of assumptions become so deeply ingrained that they kill you before you even know it. Most businesses have all the answers, and often those answers are correct, for the questions that were actually asked. The problem is that these businesses did not know the right questions in the first place. Often, their "advisors" didn't know either. So these businesses go on their merry way until they crash and burn. Boom!

Why Is A Business Plan So Important?

“I was told I don’t need a business plan to raise money anymore.”

A business plan is important so that you can logically lay out the following major success components:

  1. Lay out what you need to do to execute your business
  2. Identify the key assumptions you are relying on to drive your business
  3. Set up good metrics and benchmarks to see whether those assumptions are holding true
  4. Establish a structure for your company to operate within, so that as you identify weak or faulty assumptions you can make metered changes, confirm that the results actually improve, and make sure the changes don't break something else you won't see until it's too late

The Key Elements of a Business Plan

Recently, we were approached by a client presented with a very rapid growth opportunity. Unlike many, this company is not a startup; it has been around for more than 10 years. And like many, it has never had a business plan. They have beaten the odds and run a good small business for over a decade. So why did they need a business plan now? They needed an investor to fund this very large opportunity, and the investor who approached them conditioned their look-see on the existence of a business plan.

We went into action. Step one was to gather all the information the company had: its corporate documents, sales history, contracts, org charts, policies and procedures, IP documentation, financial statements, marketing materials, and so on. Next we read all of this material to become familiar with the fundamentals of the business. We interviewed the founder and senior management to understand their business and personal goals, looked at prior financing transactions and the board of directors, and examined how the board and management operated and shared responsibility for day-to-day operations. And a whole lot more…

They had never had a business plan – they didn’t need one! But of course, they really did.

We began to find a lot of seemingly "little" things that had been irritating and draining the business. Of course, each little thing had an explanation. But collectively they began to tell a story of where the gears in the business model did not align: in some cases they were grinding the teeth off each other, in some they were working against each other, and in others the gears spun alone and never meshed.

We then wrote the client’s first business plan, identified the assumptions, key drivers and metrics, developed the integrated business financial model and redesigned the way they actually related to and thought about their business. In the end, they no longer “didn’t need a business plan,” they wanted a business plan to help them make day-to-day decisions more effectively and allow them to run their business more easily.

In order for your business to thrive, you need to know how to look at your business and observe how its components interact with each other, with your competition, and with changes in the markets over time. You need a model and a method to observe change, identify invalid assumptions, and make changes in a controlled way, so that one change doesn't end up breaking something else in your business.

A Good Business Plan Will More Than Pay For Itself

If you end up in that small percentage of businesses that start, survive more than two years, and reach the rarefied air of multimillion-dollar liquidity, how will a business plan help you? If you have a good business plan, you will:

  1. Know how much money you really need to raise, and when
  2. Know your critical control points
  3. Have good metrics and measures
  4. Be able to build achievable goals into your business plan.

Having a good business plan means you are much more likely to achieve most, if not all, of your goals. You will be able to negotiate your various financings (series A, B, or C) from a stronger position, and you will know when and how to place the financing in order to succeed. When you succeed with your business plan, you are much less likely to be put into a "cram-down" situation, where investors reduce your valuation and dilute your holdings. Fundamentally, a good business plan will allow you to retain more ownership.

Boomcycle offers a two-day deconstruction designed to drill all the way down into your business, whether or not you have a business plan, so that we can identify every area where you are relying on assumptions: well-defined, ill-defined, or, more typically, not defined at all.

Contact Boomcycle today to schedule the first step of your booming financial future!


Understanding the Cloud – SaaS, PaaS and IaaS

“The Cloud” is encouraging a whole slew of new acronyms, including ones with such obscure names as SaaS, PaaS and even “IaaS”. As a small to medium business owner (yes, you’re an acronym too: “SMB”), you and your staff will most likely only have direct experience with SaaS – the applications that deliver software like Microsoft Office and Google Docs.

Here’s an example of the three layers and how they work together.

SaaS (Software as a Service) – You and your employees use it as off-the-shelf application software. You may decide to use SalesForce to manage your customer database. The SalesForce software is hosted on the cloud, rather than individual hard drives on your office computers, or on the network server in your office. In order to use SalesForce, you download apps that connect with the cloud version, so you’re using SaaS.

PaaS (Platform as a Service) – You hire a software developer who uses this to create custom software. (As a small business owner, you won’t have to know the technical details about PaaS or IaaS, but we’ll give you an overview just for fun.)

Now, let’s say you decide that the off-the-shelf version of SalesForce still requires too many hours of manual data manipulation.

For example, you’re paying overtime in the accounting and sales departments because they need to use different spreadsheets every month to sort and extract data that’s specific to your company.

SalesForce also sells access to bare-bones software platform versions (Force.com and others) — PaaS — on which you can develop, build and host your own version of SalesForce, customized to your individual business needs.

That’s the point where you hire a custom software developer. The custom software development team codes the custom software on the PaaS — uses it as a testing and developing platform — and saves the time it would have taken to set up the development environment on a separate server.

There are many other PaaS providers besides Force.com. And there’s fierce competition among providers to give developers more choices of languages, frameworks and platforms as PaaS evolves, especially in the Open Source community, according to Wazi, the most current news source for Open Source development.

Honestly, the further you go into learning about PaaS and IaaS, the geekier it gets. We touch lightly upon IaaS in another blog post. But trust me — your time and energy will be better spent running your business than trying to figure out the differences between PaaS and IaaS. Their functions often overlap and  they’re moving targets — evolving even as you read this.  The developers who use them don’t even agree on their definitions.

You’re better off finding a software development team that writes brilliant code, will handle the PaaS and IaaS for you, and can communicate with you so you get the software you want — really!


10 Years Of Client Success

Welcome to the new Boomcycle website! It was about time for a facelift. I mean, heck, Boomcycle is over 10 years old! How fortunate we are to be in these technological “boom times” — a proud part of the greatest nation on earth. And we walk the talk: all our knowledge workers are based right here in the U.S. We support the effort to keep great jobs in America.

Our anniversary is also the perfect time to celebrate over a decade of client success stories. We’re thankful for the opportunities we’ve had, and we love the varied and interesting work our clients ask us to do. We’ve built and upgraded large database systems, created and repaired funky eCommerce websites and coaxed “just a bit more” out of musty and fragile console-based applications. And we’ve built websites and mobile apps for hundreds of businesses throughout the United States.

We hope you’ll forgive us for not updating our website more, or peppering you with constant email updates. We’re happily working on some very interesting projects and busy mapping out the future of Boomcycle.

We look forward to serving our current clients as well as our new clients!


WordPress – Slow Loading Time?

Many companies have problems with slow WordPress websites. There are many advertised ways to tweak, tune, and cache your way into the WordPress Speed Loading Championships. We’re going to show you a couple of changes that worked wonders for our clients. Hopefully they will work for you as well!

Page Speed Before

Here is a screenshot of the Google Page Speed plugin before tweaking WordPress. The site was clocking in at 47/100, which means slow WordPress loading times for visitors. There were several things going on, some of which are outside the scope of this article. There are also several ways to analyze a website to determine exactly where the bottlenecks are, some more complicated than others. We're going to keep it simple.

For starters, the Firebug plugin for Firefox is your friend when troubleshooting speed issues on web pages. Use the “Net” time line to quickly identify the biggest delays when loading your web page. (If you’re feeling extra frisky, check out xdebug for PHP.) In boomcycle.com’s case, there were 3 images that stuck out immediately, as they were each taking over 4 seconds to load. The browser was downloading them concurrently, so it wasn’t as big of a delay as it could have been.

On to the tweaks…

MySQL Query Cache

OK, this is an easy one. Just add the following lines to your MySQL configuration file (usually my.cnf), and then restart MySQL. Unfortunately, this little tweak is probably only available if you have root access to a dedicated or VPS server. If you're on shared hosting, it doesn't hurt to open a support ticket and ask whether they would be willing to add it globally.

query-cache-type = 1
query-cache-size = 20M

These settings tell MySQL to use up to 20 megabytes of cache to store common queries and their result sets. If there is more memory available on your server, go ahead and crank the setting higher, but don't go too crazy. More information on query caching can be found in the MySQL documentation.
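
If you want to confirm the change took effect, you can ask MySQL directly (these statements apply to MySQL 5.x; the query cache was removed entirely in MySQL 8.0):

-- Check whether the query cache is enabled and how it is being used.
SHOW VARIABLES LIKE 'query_cache%';
SHOW STATUS LIKE 'Qcache%';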

timthumb.php

This particular theme, Minimal, makes use of an image resizing (among other functions) script called timthumb. While this is a handy script, the version installed was fairly old and was taking over 4 seconds to read, resize, and then serve a thumbnail from a 300K image. The newest version shaved almost a full 3 seconds off the image load time due to better processing and caching.

The easiest way to tell whether your theme is using TimThumb is to use the Media section of Firefox's built-in Page Info utility. Just look for a URL with timthumb.php in it, like this one:

http://boomcycle.com/wp-content/themes/Minimal/timthumb.php?src=http://boomcycle.com/wp-content/uploads/2011/01/web_development_sample_3.png&h=226&w=406&zc=1&q=90

That URL also helps you identify where the file is stored on your server. In this case, it is in wp-content/themes/Minimal, which is relative to the public html directory of the web user. Use that information to save a copy of the original and then upload the new version in its place.

gzip compression

We are constantly amazed at how many web sites out there don’t take advantage of this very useful setting. gzip compression allows the server to compress almost everything except images before sending it to the visitor, saving both time and bandwidth. Web browsers have been supporting gzip compression for years, so there’s almost never a reason not to use it.

For boomcycle.com, enabling gzip compression saved over 180KB per initial page load. (After the initial view the browser has usually cached the content, unless another web server setting, Expires headers, is missing; more on that in a minute.) Sure, 180KB of bandwidth savings doesn't sound like much, but multiply that by 10,000 visitors and you've just saved over 1.8GB of bandwidth. It also means your web server can spend more time handling other requests instead of being tied up delivering large, blocking resources like JavaScript (more on that subject in a minute as well).

First, check to see if gzip compression is enabled on the web server in question. There are several ways to do this, but by far the easiest is with this web site:

http://www.whatsmyip.org/http_compression/

That compression test will also report how much there is to gain by enabling gzip compression. One thing to keep in mind is that all modern image formats are already highly compressed, so enabling gzip compression for images wastes CPU time for little or no benefit and can even make the files slightly larger.

Various speed-checking plugins for Firefox will also tell you whether gzip compression is enabled, but we have seen them report false information due to aggressive browser caching. There is nothing more frustrating than chasing your tail trying to troubleshoot an issue that doesn't really exist. Instead, use a third-party point of view that hasn't cached the website in question.
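
Another way to get an uncached, outside view is a command-line request from a different machine using curl (substitute the site you want to test for boomcycle.com):

# Request the page with gzip allowed and inspect the response headers.
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://boomcycle.com/ | grep -i content-encoding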

In order to enable gzip compression on your web server, it has to be compiled in and available. If your web site is on a shared hosting provider, then you’ll have to open a ticket to request enabling gzip compression globally. It’s better for you; it’s better for them; it’s better for all of their customers.

The following settings are typical gzip compression settings for Apache 2.x web servers. The settings are wrapped in <IfModule> blocks to ensure they don't break your website if, for some reason, the compression module is unavailable. This configuration can be placed in the main .htaccess file or in a globally included configuration file.

<IfModule mod_deflate.c>
    SetOutputFilter DEFLATE
    <IfModule mod_setenvif.c>
        # Netscape 4.x has some problems
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        # Netscape 4.06-4.08 have some more problems
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        # MSIE masquerades as Netscape, but it is fine
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
        # Don't compress already-compressed files
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
        SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ no-gzip dont-vary
        SetEnvIfNoCase Request_URI \.(?:avi|mov|mp3|mp4|rm|flv|swf|mp?g)$ no-gzip dont-vary
        SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary
    </IfModule>
    <IfModule mod_headers.c>
        #Make sure proxies don't deliver the wrong content
        Header append Vary User-Agent env=!dont-vary
    </IfModule>
</IfModule>

If the settings are put into .htaccess, the changes will take place immediately upon next page load. If they are in a main Apache include file, the service must be reloaded or restarted for the changes to take effect.

Expires Headers

Most web sites have static content that rarely or never changes. Images, style sheets, javascript, etc. This content should be delivered with appropriate expires headers so that the visitors to your site will have the content cached in their browsers instead of having to request it for every single page load. The result is less bandwidth and resources used on your server, and a much faster user experience for your visitors.

The configuration settings are similar to the gzip compression settings. They can go into the .htaccess file, or a globally included configuration file:

<IfModule mod_expires.c>
    # Remove ETags if mod_expires is controlling caching
    Header unset ETag
    FileETag None

    ExpiresActive on
    ExpiresByType image/jpg "access plus 60 days"
    ExpiresByType image/png "access plus 60 days"
    ExpiresByType image/gif "access plus 60 days"
    ExpiresByType image/jpeg "access plus 60 days"

    ExpiresByType text/css "access plus 1 day"

    ExpiresByType image/x-icon "access plus 1 month"

    ExpiresByType application/pdf "access plus 1 month"
    ExpiresByType audio/x-wav "access plus 1 month"
    ExpiresByType audio/mpeg "access plus 1 month"
    ExpiresByType video/mpeg "access plus 1 month"
    ExpiresByType video/mp4 "access plus 1 month"
    ExpiresByType video/quicktime "access plus 1 month"
    ExpiresByType video/x-ms-wmv "access plus 1 month"
    ExpiresByType application/x-shockwave-flash "access plus 1 month"

    ExpiresByType text/javascript "access plus 1 week"
    ExpiresByType application/x-javascript "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>

If you're curious what the ETag settings are for, feel free to Google it. In our experience ETags are not a big deal one way or the other, but they always show up in the speed test results if they are not disabled.

The other settings are perhaps self-explanatory. Adjust them according to your web site usage and update habits. If you frequently alter flash content, then just change the application/x-shockwave-flash expires setting to “access plus 1 day” instead of 1 month. If you have a WordPress blog where existing content never changes, then it’s probably safe to crank most of the settings to several months instead.

WordPress Plugins – DB Cache Reloaded and Hyper Cache

There are plenty of WordPress caching plugins to choose from, so how do you decide which one is right for your website? The answer: a lot of trial and error, or a lot of reading other people's comments about their experiences.

We’ve tried many different caching plugins, and by far the most impressive is a combination of DB Cache Reloaded and Hyper Cache. They work wonders with out-of-the-box configuration, and should not interfere with anything WordPress is trying to do. Having said that, they might not be compatible with other caching plugins, so it’s best to delete the old cache stores and then disable the other plugins before activating DB Cache Reloaded and Hyper Cache.

Page Speed After Tweaks

After all the above tweaks were implemented, the increase in speed was significant, both on paper (66/100) and, most importantly, in user experience (sub-two-second load times).

NOTE: When testing with Page Speed, be sure to hold down the SHIFT or CTRL key while pressing F5 in your browser. This forces the browser to request all of the content from the web server, regardless of what is in the browser's cache. Do this a couple of times to ensure caching has been activated in the WordPress plugins, and then run the Page Speed test. The result should be obvious in Firebug. For example, once TimThumb had cached the thumbnails, image load times dropped from over 4 seconds to 500-900 milliseconds.

Interestingly, the Firefox version of the Page Speed plugin reported an even better score than the Chrome version – 79/100. That’s on par with some of the biggest and most popular web sites on the internet today.


The only way to get a much better rating is to combine JavaScript and convert everything to static content.

Other WordPress Speed Tweaks

You might have noticed that Page Speed mentions JavaScript in several places in all the reports. There is a good reason for this. When the web browser requests JavaScript, it does so one file at a time, essentially blocking all other requests for content until the JavaScript has downloaded completely. (The reason for this seemingly odd behavior is beyond the scope of this post; Google it if you're curious.) The impact in this case is a 66/100 Page Speed score instead of a much higher result, since the browser had to request 6 different JavaScript files.

There is only one way to resolve this problem, and it can cause serious issues and upgrade headaches in the future. We do not recommend implementing the change unless you really know what you are doing. The resolution is to combine all JavaScript sources into one large file, and then update the source HTML/PHP/template so that it only references that one file. If your WordPress theme makes proper use of headers and footers, this might actually be a fairly easy change to make. Upgrading the theme or making other WordPress changes could easily break what you've changed, however.
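
In its crudest form, the combining step is just concatenation; the file names below are placeholders for whatever scripts your theme actually loads:

# Combine the theme's scripts into a single file, then reference only combined.js
# in the theme's header or footer template.
cat jquery.plugin.js lightbox.js theme-custom.js > combined.js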

The other recommended change relating to Javascript is to move all the Javascript to the end of the page instead of at the top. The result is that all the visual content is loaded before the Javascript loads, making it appear that the page loaded much faster than it really did. This is also not a change to take lightly, as it could very easily break your web site.

By far the best method of speeding up a WordPress website is to use a plugin that pre-caches all of the content and then serves it as static HTML files instead of dynamic content. The result is blazing Page Speed scores in the low-to-high 90s. However, this particular method of caching will not work for the majority of WordPress blogs due to their liberal use of dynamic content: ads, tags, comments, and so on. Usually, if a WordPress site is busy enough to warrant this type of pre-caching, its visitors will be expecting constant updates, defeating the whole purpose of pre-caching to begin with (and if the site is that large and busy, it's probably time to invest in some dedicated servers with all of that ad revenue coming in!).

We hope you’ve found this information helpful for speeding up your WordPress website. If you still need faster website speed, Boomcycle would love to help!



Is My Server Secure?

If you are a sysadmin, or if configuring Linux servers is part of your job description, you've probably been asked to set up all kinds of software packages under tight deadlines and cost constraints. After you installed mail, DNS, MySQL, and Apache, and your application stack was up and running, you might have briefly paused to wonder, "Is my server secure?" It's a very important question. Let us put aside for a moment the acrid debate over whether open source software is more or less safe than closed source software, and heed famous security expert Bruce Schneier, who argues that security is not something you can buy; it is something you must get for yourself.

If you’ve ever received a phone call from someone wailing that the “server is down” and they are “losing money by the hour” then you probably logged into the poor machine and restarted a few services (or the entire machine) and then started sniffing around the log files to see what happened. It is during this sort of rescue operation that one really starts to wonder about the security of a given machine. It’s quite disconcerting to see endless brute force login attempts in the sshd log.  It’s infuriating to see SQL injection attempts in an Apache log. Almost always, an organization just wants to get the machine running again with a minimum of expenditure.  Your instructions are to just get it running again.

It is entirely understandable that organizations want to avoid a real security audit in order to save money, but, make no mistake, saving money in this way inherently involves risk, especially if your servers process financial transactions or other sensitive data. Consider the damage dealt to Sony's reputation by the PlayStation Network hack in the spring of 2011. Consider also the breach of the Dutch certificate authority DigiNotar that came to light in the summer of 2011. These large institutions suffered staggering blows to their reputation and credibility because they failed to guard the data entrusted to them.

To secure your Linux servers, there are a few very important and fairly easy things you can do to keep the bad guys out:

Firewalls

Close every port that doesn’t need to be open and disable every service you do not need that might open a port. Limit administrative access (e.g., SSH access on port 22) to networks that belong to you. This iota of prevention is worth megatons of cure. The iptables command is an amazingly powerful tool that can lock your server up tighter than Fort Knox.  Amazon’s EC2 service offers Security Groups.  Make use of these if you have them at your disposal.
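
For illustration, rules like the following allow SSH only from a network you control and drop it from everywhere else (203.0.113.0/24 is a placeholder range; adapt it to your own network and existing rule set):

# Allow SSH from your office network only, drop it from everywhere else.
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP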

Secure Authentication

Disable direct login as root via SSH; it's in the sshd configuration. Go even further: disable password login and require key-pair authentication. If you must use passwords, make sure you pick good ones; it's not as easy as you think. Also, when users must authenticate themselves, make absolutely certain that the passwords are encrypted in transit. This means using SFTP rather than plain old FTP, and requiring SSL/TLS for mail connections.
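
The relevant sshd directives look like this (a minimal excerpt of /etc/ssh/sshd_config; restart sshd after editing):

# /etc/ssh/sshd_config -- disable root and password logins, require keys.
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes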

Autoban

Simple packages like fail2ban allow you to create "jails" which monitor your logs so that, using simple rulesets, you can temporarily ban any remote address that repeatedly fails to log in, requests nonexistent pages, and so on. If you must have ports open, fail2ban helps you protect them.
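
A minimal SSH jail might look like the excerpt below, placed in jail.local (the exact section and option names vary a bit between fail2ban versions):

# /etc/fail2ban/jail.local -- ban an address for an hour after 5 failed logins.
[sshd]
enabled  = true
maxretry = 5
findtime = 600
bantime  = 3600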

Update Your System Regularly

When security patches are released, you need to update your software to take advantage of them. The longer you wait, the longer you may have a hole in your system. Unfortunately, this usually requires some human interaction: if you blindly auto-update your servers, a patch may break something in your system. Build some time into your budget to apply regular updates, but test them first!
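
On the two most common server families, applying updates is a one-liner (run it on a staging machine first if you can):

# Debian / Ubuntu
apt-get update && apt-get upgrade
# Red Hat / CentOS
yum update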

Integrity Checking

A file integrity checker such as samhain can provide tremendous peace of mind.  It’s an automated daemon process that calculates a hash on your precious binaries and critical folders and lets you know if they ever change.  If someone breaks in, samhain will let you know soon after.  If you are worried about someone halting samhain before it gets a chance to inform you, you can go the extra mile and conceal samhain’s operation using generic process names and steganography.

Good Coding Practices

At the end of the day, your application looks like Swiss cheese compared to the core applications that support it. Linux, Apache, MySQL, and the rest are grizzled, hardened veterans compared to the script kids you hired on Craigslist to write your email form. Choose your developers carefully. Make sure you check out the Top 25 Coding Mistakes; your coders should know what these mistakes are and how to avoid them. If you are considering hiring a developer, quiz them on this. A good coder will know a thing or two.



Intro to Mobile Apps

Mobile App Development for the Non-Techie

Most business people know they need a mobile presence, but many don't know where to start, or even what an "app" truly is. Technology is moving quickly, and, as usual, the business owner whose core business is not technology-centric is left behind in the ongoing and ever-changing technology conversation. Recently, that conversation has centered on mobile devices. Mobile is an exploding field, with new devices seemingly entering the market every day and an ever-expanding user base demanding more convenience. If a business does not enjoy a mobile presence, the marketplace may leave it behind.

Here at Boomcycle, we hear similar questions again and again:

  • What is an “app”?
  • What are the various mobile platforms I should care about?
  • What types of apps can be built?
  • What does an app cost to build?
  • How does an app get into the iTunes App Store?
  • Can I build one app that works on all devices?

So let us address the "mobile basics" so that you have a better idea of what you need and how to most effectively implement your business's mobile presence.

What Is An App?

Some would say that Apple did the world a service by using television ads to make people aware of programs that run on mobile devices, especially the iPod, the iPad and, most popularly, the iPhone. The word "app" is short for "application," meaning a program, written in a computer programming language, that performs some function. Prior to the introduction of the word "app" into the common vernacular, we often noticed an immediate glazing of the eyes whenever we used the word "application" in front of clients. So thank you, Apple: the word "app" is now commonly understood to mean "little programs that run on iPhones." Such a definition, while far too vague and far too broad, at least allows "techies" and "civilians" to start a conversation!

And while Apple does not have a monopoly on mobile devices, they do currently seem to enjoy a monopoly on mobile mind-share. When most people think of apps, they think "i-something" first (iPhone, iPod, iPad) and every other platform (if they are even aware there are other platforms) second. This is changing rapidly, but because Apple popularized the app, Apple iOS apps are usually the first thing our clients consider when looking to go mobile.

What Are The Most Popular Mobile Devices?

You’ve no doubt heard of at least some other mobile platforms. The biggest mobile platforms at the time of this writing are:

  • iPhone, iPod
  • iPad
  • Android
  • Blackberry

If you are thinking about “going mobile” with your website or building a totally custom app, these are the main devices on which you should focus. Naturally as time marches on, newer devices are bound to displace or obviate the current list.

Why did we separate the iPhone/iPod from the iPad? The size of the display. The larger iPad enables comfortable use of different kinds of apps; reading apps like Kindle and iBooks are the most popular on these larger devices. While some people consider the iPad a ho-hum usability improvement, they fail to see the tectonic shift it is causing in sales, medicine and electronic publishing, to name but a few fields.

What Types of Apps Can Be Built?

Here’s where the definitions get a tad foggy, but bear with us, we’ll do our best to make it clear!

The Mobile Friendly Website

Your website has been around for years, and you’ve probably tried to pull it up on your mobile device’s web browser. Remember what an annoying experience that was? The text is too small, the menus are a mess and the website is generally unusable. So you never look at it that way again. What you need is to “re-purpose” your existing “traditional” website as a Mobile Friendly or Mobile Compatible website.

Building a mobile friendly website is the easiest way to “go mobile” and retain the eyeballs you’ve worked so hard to get on your traditional website. You will need to do a few things to create a mobile website:

  • Ensure that your current website is powered by a Content Management System (CMS) like WordPress
  • Evaluate your current site and make a list of the most important factors for mobile display
  • Create screen mock-ups and get feedback from stakeholders, clients and customers
  • Build the mobile version of your website based on the mock-ups

Your CMS will provide data to both your traditional website and your mobile website, ensuring consistent content on both.

Native App

These are what most people are referring to when they say "app." A native app is typically downloaded from the mobile platform's store (for example, Apple's iTunes App Store) and runs "natively," on the phone's hardware. No connection to a wireless network is necessarily required to run a native app. Native apps can be ebook readers, games, calculators or any number of useful programs that don't necessarily require a connection to the internet. The look and feel of an iPhone app is fairly well established thanks to Apple's user interface conventions; other platforms are more akin to the Wild West, design-wise.

The big problem with native apps is that they are native to the devices on which they run. So if you want to develop native apps for mobile devices, you must select the mobile platforms on which you wish to make your app available. Most companies that want a native app start with the iPhone/iPod/iPad and Android, though many also choose to develop for Blackberry simultaneously. These are definitely the Big Three when it comes to mobile development today.

Web App

A “web app” is a special type of app that looks like a Native App except that a web app runs on your mobile device’s web browser (for example, on the iPhone, a web app runs in Safari, the built-in web browser). A web app feels like a native app to the user, except that it relies on a data connection to work. For example, a web app that uses GPS needs a data connection to determine the location of the mobile device. A simple example of this type of web app is the Google Maps application that comes standard on Apple mobile devices.

What Does an App Cost to Build?

If this is not the first question we hear, it is a close second. We wish there was a simple answer to this question! Programming for mobile devices is still programming: it can be complex and it requires education and experience in order to do well. The complexity of any program is the key cost factor. The greater the complexity, the greater the number of hours to design and develop the app.

The other driver in the mobile equation is supply vs. demand. There are a limited number of good mobile developers and an exploding demand for the development of new mobile apps. Mobile development is currently more expensive than traditional software development.

One of the most important considerations in mobile development is the location of the data that the app uses: where does the data for the app come from?

Complexity may be found in surprising places. For example, to “mobil-ify” a standard website, one of the first things to consider is whether or not the website uses a CMS as its data source. If the site is “plain vanilla” HTML/CSS or Flash-based, the information from the website may not be usable by the mobile friendly website. In this case, the first step of the mobile friendly website initiative may be converting the current website so it uses a CMS (Project #1). The second step is then building a mobile friendly website that uses the data from the CMS.

If your app’s data can be consumed freely from the web (e.g., Google, Craigslist, eBay, etc.), your app can pull that data and, with a bit of user customization such as a saved search (e.g., “I want to look at houses for sale in Santa Monica for $750K”), bang! You have a useful app, because the data is freely available and current.
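To make the idea concrete, here is a minimal sketch of an app pulling listing data for a user’s saved search. The endpoint api.example-listings.com and its parameters are hypothetical placeholders, not a real service.

```python
# Minimal sketch: pull listing data for a user's saved search.
# The API endpoint and its parameters are hypothetical placeholders.
import requests

saved_search = {"city": "Santa Monica", "max_price": 750_000, "type": "house"}

response = requests.get(
    "https://api.example-listings.com/v1/listings",
    params=saved_search,
    timeout=10,
)
response.raise_for_status()

# Display whatever the (hypothetical) service returned for this search.
for listing in response.json().get("results", []):
    print(listing.get("address"), listing.get("price"))
```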

However, if your data is custom and requires effort to maintain — such as your own company’s data (products in stock, prices for software, the number of locations that serve your customers) — then this data must be maintained by back-office systems. These back-office systems may already exist, in which case the effort may lie in connecting those systems to the outside world so that a mobile device can use the data. If these back-office systems must be built to support a mobile app, developing them will add to the cost of developing the “app”. In this latter scenario, the app itself may be thought of as simply the display layer for data maintained through more traditional systems.

Beware of mobile app developers who give you quotes for apps that have yet to be designed and formally specified through consultation with a competent programming consultancy.

How Does an App Get In the App Store?

A detailed explanation of why an app may or may not get into Apple’s iTunes Store is beyond the scope of this article. Apple may not approve an application for obvious reasons (buggy or poorly-performing) or reasons that may be far less “obvious”. The iPhone Developer Program License Agreement is a lengthy document which attempts to codify these issues. Suffice it to say, getting your app in iTunes is not a slam-dunk and may involve some time and effort on the part of your business and your consultant.

The challenging Apple approval queue is one reason why a mobile website (which doesn’t require approval) may be a good first step for businesses looking to dip their toes into mobile development.

Can I Build One App That Works On All Devices?

Strictly speaking, an app that is “native” to Device “A” will not run on Device “B”. For example, a native iPhone app will not run on an Android device, or any other manufacturer’s device. This means if you want to develop native apps, you will need to develop an app for each device you wish to target. Again, the most popular targets these days are iPhone/iPod and iPad, Android and Blackberry, in that order.

Web apps and mobile websites entail less deployment risk and can generally run on multiple mobile browsers. Your development team must be certain to check the app against the most popular mobile browsers, just like with traditional web applications on standard desktop computer web browsers.

Go Mobile!

Mobile is exploding, and the businesses that go mobile first will be the first to capture the eyeballs and business of the users of these devices. If your website is not mobile-friendly, that is an obvious place to start your mobile initiative.

Ideas for native and web apps are everywhere and one need only survey the iTunes App Store or Android Market to get a plethora of ideas.

So let your imagination run wild: you may create that killer mobile app that your competitor doesn’t have! Do so, and you’ll be tapping into a whole new generation of clients and consumers of your products and services.

Categories
Amazon Cloud Blog Cloud Computing MySQL Development

Leveraging Amazon RDS to maximize database performance

Amazon Relational Database Service (RDS) is a feature of Amazon’s cloud computing platform that allows customers to establish an installation of the MySQL Relational Database Management System (RDBMS) running “in the cloud.” This installation of MySQL functions exactly like MySQL installed on conventional servers: existing software requires no changes, and new software requires no special design considerations. Unlike conventional servers, however, Amazon RDS offers significant advantages in scaling and fault tolerance.

Amazon RDS Scaling

Scaling refers to the ability to increase the capacity of the database server to meet customer demand. As customer demand increases, database servers may be unable to keep up with this demand. They will become slower and slower as their capacity to serve requests is exceeded and eventually will begin dropping requests and causing errors. Amazon RDS provides cloud-based tools for scaling the database to head off this trouble in the form of both horizontal and vertical scaling.

Horizontal Scaling

Horizontal scaling refers to increasing the capacity of the database by adding more machines. Amazon RDS facilitates this by providing a simple way to implement database replication. With database replication, one installation of Amazon RDS serves as the “master” and one or more additional instances are set up as separate and distinct “read replicas.” All updates to the database take place on the single master instance, and those updates propagate out to the read replicas. Because the typical choke point for database-driven web applications is many simultaneous attempts to read the database, this provides the valuable benefit of sharing the read load among multiple servers. Amazon RDS supports establishing as many read replicas as necessary.
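For example, with the current AWS SDK for Python (boto3), adding a read replica to an existing RDS instance is a single API call. The instance identifiers and instance class below are placeholders, and the sketch assumes AWS credentials are already configured.

```python
# Sketch: create a read replica of an existing RDS instance.
# Instance identifiers are placeholders; assumes configured AWS credentials.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="myapp-replica-1",       # new read replica
    SourceDBInstanceIdentifier="myapp-master",    # existing master instance
    DBInstanceClass="db.m5.large",
)
```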

One drawback of database replication is that it is asynchronous. Because updates must propagate out from the master to the read replicas, and because this propagation is not instantaneous, the data on a read replica may briefly differ from the data on the master. An application that uses this type of scaling must be designed with care to prevent data-inconsistency errors.
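One common design response (a sketch of one approach, not the only one) is to route all writes, and any read that must see the very latest data, to the master, while sending everything else to a replica. Connection parameters below are placeholders, and the PyMySQL driver is an assumption.

```python
# Sketch: simple read/write routing for a replicated MySQL database.
# Hostnames and credentials are placeholders; assumes the PyMySQL driver.
import pymysql

master = pymysql.connect(host="master.example.com", user="app",
                         password="secret", database="shop")
replica = pymysql.connect(host="replica-1.example.com", user="app",
                          password="secret", database="shop")

def run_query(sql, params=(), *, write=False, needs_fresh_data=False):
    """Send writes (and reads that must be up to date) to the master;
    send everything else to a read replica."""
    conn = master if (write or needs_fresh_data) else replica
    with conn.cursor() as cur:
        cur.execute(sql, params)
        if write:
            conn.commit()
            return None
        return cur.fetchall()

# A write goes to the master...
run_query("UPDATE orders SET status = %s WHERE id = %s",
          ("shipped", 42), write=True)
# ...while a report query can tolerate slight replica lag.
rows = run_query("SELECT status, COUNT(*) FROM orders GROUP BY status")
```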

Another type of horizontal database scaling is “sharding”. Sharding splits the database into pieces that live on separate servers, so that no single server has to bear the entire load. A database scaled in this manner does not suffer the drawbacks of asynchronous replication, but sharding requires very careful design of both the database and the software. For extra scaling power, sharding may be employed in tandem with replication, with each shard served by its own set of read replicas.
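A minimal sketch of the routing side of sharding follows; the shard hostnames and the choice of customer ID as the shard key are illustrative assumptions, and a real system would also have to plan for re-sharding as data grows.

```python
# Sketch: route each customer to one of several database shards.
# Hostnames are placeholders.
import hashlib

SHARDS = [
    "shard0.example.com",
    "shard1.example.com",
    "shard2.example.com",
]

def shard_for(customer_id: int) -> str:
    """Pick a shard deterministically from the customer's ID."""
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for(12345))   # the same customer always maps to the same shard
```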

Vertical Scaling

Vertical scaling refers to increasing database capacity without adding more machines: faster processors, more RAM, bigger disks, faster network connections, and so on. Because the Amazon RDS instance runs in the cloud, increasing the amount of RAM, the number of processor cores, or the disk space is a simple matter and may be done “on-the-fly.”
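With boto3, this kind of resize is one call. The identifier, instance class, and storage size are placeholders, and depending on the change the instance may briefly restart unless the change is deferred to a maintenance window.

```python
# Sketch: vertically scale an RDS instance to a larger instance class.
# Identifier and sizes are placeholders; assumes configured AWS credentials.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="myapp-master",
    DBInstanceClass="db.m5.2xlarge",   # more CPU cores and RAM
    AllocatedStorage=500,              # grow storage to 500 GB
    ApplyImmediately=True,             # otherwise waits for the maintenance window
)
```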

Fault Tolerance

Amazon RDS has an option called a Multi-AZ (Multiple Availability Zone) deployment. Instead of a single server instance, an additional server instance with an identical configuration is established in a geographically distinct location. If the primary server is ever unreachable (due to a network outage or hardware failure), this second server immediately takes over in a completely transparent manner. The second server’s data is synchronous with the primary server’s, meaning the two databases are always identical. This provides an effective backup of the database and goes a very long way towards ensuring the database is always available.
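Turning on Multi-AZ for an existing instance is likewise a single boto3 call; the instance identifier is a placeholder.

```python
# Sketch: convert an existing RDS instance to a Multi-AZ deployment.
# Identifier is a placeholder; assumes configured AWS credentials.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="myapp-master",
    MultiAZ=True,              # provision a synchronous standby in another AZ
    ApplyImmediately=True,
)
```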

Recent Amazon Outage

In April of 2011, Amazon suffered a much-publicized outage of parts of its cloud computing platform, illustrating quite dramatically that even its system is not perfect. At a high level, the outage affected multiple availability zones in the “US-EAST-1” region, making some high-profile websites unavailable and causing some permanent data loss. Although outages of this nature are rare, they are a risk that must be accounted for. Risks of this type may be mitigated by deploying across distinct geographic regions (not just availability zones within a single region) and by having a backup plan for the database that is independent of Amazon RDS.

Combining Scaling with Fault Tolerance

Scaling and fault tolerance may be combined to maximize the availability of databases to clients. In an environment with a single master and multiple read replicas, the master may be configured as a Multi-AZ deployment. If the master becomes unavailable, its standby immediately takes over its duties, and changes continue to propagate automatically from it to the replicas. It is also possible to configure the read replicas themselves as Multi-AZ deployments, though the advantage is less obvious: multiple read replicas already serve as backups of one another, even if they are not located in separate geographic locations (which does add some risk).
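Put together, a combined configuration might look like the sketch below: a Multi-AZ master plus two read replicas. All names and sizes are placeholders, and in practice you would wait for the master to reach the “available” state before creating replicas.

```python
# Sketch: a Multi-AZ master with two read replicas (all names are placeholders).
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="myapp-master",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=200,
    MasterUsername="admin",
    MasterUserPassword="change-me",
    MultiAZ=True,                      # synchronous standby for the master
)

# In practice, wait until the master is "available" before this step.
for i in (1, 2):
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier=f"myapp-replica-{i}",
        SourceDBInstanceIdentifier="myapp-master",
        DBInstanceClass="db.m5.large",
    )
```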

Finding the right solution

Boomcycle benchmarks and load-tests Amazon RDS instances using real data scenarios from the application to determine capacity. These results are balanced against the client’s requirements and budget to arrive at a configuration that meets the client’s needs.
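The exact benchmarking harness varies per project, but a stripped-down sketch of the idea — timing a representative application query against an RDS MySQL endpoint — looks like this. The connection details, query, and PyMySQL driver are placeholders and assumptions.

```python
# Sketch: time a representative query against an RDS MySQL endpoint.
# Connection details and the query are placeholders; assumes PyMySQL.
import statistics
import time
import pymysql

conn = pymysql.connect(host="myapp.abc123.us-east-1.rds.amazonaws.com",
                       user="app", password="secret", database="shop")

QUERY = "SELECT COUNT(*) FROM orders WHERE status = %s"
timings = []

for _ in range(100):
    start = time.perf_counter()
    with conn.cursor() as cur:
        cur.execute(QUERY, ("shipped",))
        cur.fetchall()
    timings.append(time.perf_counter() - start)

print(f"median: {statistics.median(timings) * 1000:.1f} ms")
print(f"p95:    {sorted(timings)[94] * 1000:.1f} ms")   # 95th of 100 samples
```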

Categories
Blog MSSQL Development programmers SQL Development Web Development Outsourcing

Astonishing True Tales of MS SQL Development!

It’s one thing to claim competence in a particular technology and quite another to demonstrate competence. Thus we at Boomcycle like to relate an “astonishing true tale” of Microsoft SQL query optimization in the real world: a client who had a problem with Microsoft SQL Server 2008 that the Boomcycle team successfully resolved.

The purpose of this work was to resolve the slow performance of an MS SQL query. Our client had an ASP.NET website using an MS SQL database as the data source. One of the website’s pages displays a list of orders in a grid view. This is an extremely common use of MSSQL in ASP.NET. In the client’s web application, a fixed number of orders can be displayed at one time on the orders page; to see the next set of records, the user must click a page number at the bottom of the page.

Unfortunately the query that extracts a set of records from the database required more than one minute to run. This exquisitely torturous wait led to near total abandonment of this tool by the client’s staff.

The problematic query was fairly complex, with several sub-queries and multiple table joins. To resolve the issue, our Boomcycle MSSQL engineers decided to split the query into several parts, analyze the execution plan for each part with the tools built into MS SQL Server Management Studio, and identify the part(s) of the query causing the delay and the reason for it.

Using Management Studio, our team ran a series of queries against the problematic query’s constituent tables with different join criteria, and analyzed the execution plans for these queries to see which indexes were engaged.
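The same divide-and-conquer step can also be scripted outside of Management Studio. Here is a rough sketch with Python and pyodbc; the connection string, table names, and sub-queries are placeholders, not the client’s actual schema.

```python
# Sketch: time each piece of a slow query separately to find the hot spot.
# The connection string, table names, and sub-queries are placeholders.
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbserver.example.com;DATABASE=Orders;UID=app;PWD=secret"
)

SUB_QUERIES = {
    "orders join customers":
        "SELECT COUNT(*) FROM Orders o JOIN Customers c ON c.Id = o.CustomerId",
    "orders join items":
        "SELECT COUNT(*) FROM Orders o JOIN OrderItems i ON i.OrderId = o.Id",
}

cursor = conn.cursor()
for name, sql in SUB_QUERIES.items():
    start = time.perf_counter()
    cursor.execute(sql).fetchall()
    print(f"{name}: {time.perf_counter() - start:.2f} s")
```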

Happy accidents sometimes happen, even in complex MSSQL query optimization: after a bit of this study, Boomcycle engineers discovered that the original problematic query had begun to run much faster — 6-8 seconds instead of more than one minute. It turned out that running each sub-query in the graphical query analyzer created statistics that allowed SQL Server to build a cost-efficient execution plan for the original query.

Nevertheless, after getting this result our client was eager to make the query run even faster, so our team continued its optimization investigation. Ultimately we discovered that the main performance problem with the old query was that it was written in a way that did not allow SQL Server to take advantage of parallel query execution, a feature that can significantly increase performance on multiprocessor systems. Specifically, the problematic query used the SQL UNION operator, which prevented parallel execution in the case of a simple join of two composite queries. Boomcycle MSSQL experts rewrote the query to ensure that multiprocessor parallel execution would be engaged when it ran.

In addition to the parallel execution fix, the problematic query was cleansed of useless sub-queries. The original query used an external view that performed superfluous data operations; the view was replaced with only the tables that were actually required.

As a result of Boomcycle’s optimization efforts, query execution time dropped to 3-4 seconds to output all rows and to 1-2 seconds to search for a record with specific criteria.

Boomcycle’s client was very happy, and his staff began to use the once-abandoned system again!

Even after all this work, there were additional avenues for further optimization, such as adding new indexes for the most frequently used query criteria. However, our client was content with the execution speed and considered the problem solved.

Do you have any MSSQL problems that Boomcycle’s team can solve?

Contact Boomcycle

The most efficient way to begin a discussion of your needs is through our contact form below. Providing details about your requirements allows us to help you most quickly.
  • Please describe your project in as much detail as possible.
  • Let us know when you need your project completed. If this is ongoing work or you don't know, leave this field blank.
  • Anything else you'd like to tell us!