
Designing an appropriate level of security

Secure design and practices are at odds with easy access and convenience, so each organisation must decide where it is comfortable on the security versus convenience spectrum.

At one extreme, data can be shielded from EMP and TEMPEST attacks by a Faraday cage. Guards and gateways, including biometric authentication, control access to the datacentre. Once logged on, you have the minimum rights necessary to do your job. You have redundant cooling and electrical power. Any outside data links use strong encryption, and failover and backup go to a similar datacentre far enough away not to be caught in the same nuclear blast, tsunami, or earthquake.

At the other extreme, everything is in the cloud using SaaS such as Microsoft 365, accessed from home computers and endpoint devices, including employees' phones and tablets. Add low password complexity and a lack of multifactor authentication, and your data is accessible to any government agency or competent hacker.
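To make the password-complexity point concrete, here is a minimal sketch of the kind of policy check a provisioning script might enforce. The length and character-class thresholds are illustrative assumptions, not a standard.

```python
import re

def is_strong_password(password: str, min_length: int = 12) -> bool:
    """Illustrative password-complexity check: minimum length plus
    four character classes. Thresholds are assumptions, not policy."""
    if len(password) < min_length:
        return False
    required_classes = [
        r"[a-z]",         # lowercase letter
        r"[A-Z]",         # uppercase letter
        r"[0-9]",         # digit
        r"[^a-zA-Z0-9]",  # symbol
    ]
    return all(re.search(pattern, password) for pattern in required_classes)

print(is_strong_password("Password1"))             # too short, no symbol
print(is_strong_password("c0rrect-H0rse-battery"))
```

A check like this is the floor, not the ceiling; it does nothing to replace multifactor authentication.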

Complexity

Never buy technical infrastructure that is beyond your ability to support competently. Don't be the organisation that 'solves' every problem with a new product.

It might sound obvious, but far too often I have seen organisations sold complex solutions that are above their staff's skill levels, so they never get more than basic functionality out of them. The extra unconfigured feature set is just left as a larger attack surface for a cyber attacker.

SAN infrastructure is complex and expensive. Do you really need a SAN? Would you be better off buying 2-unit rack-mount servers and using local disk? Local disk access is often faster than SAN disk access, which can introduce bottlenecks.
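If you want to test the local-disk-versus-SAN claim rather than take it on faith, a rough probe like the following can be pointed at a local directory and then at a SAN-backed mount. The method and sizes are simplifications, not a substitute for a proper benchmarking tool such as fio.

```python
import os
import tempfile
import time

def write_throughput_mb_s(directory: str, total_mb: int = 64) -> float:
    """Write total_mb of data to a temp file in `directory`, fsync it,
    and report MB/s. Run it against a local disk path and a SAN-backed
    path to compare. Paths and sizes here are placeholders."""
    chunk = os.urandom(1024 * 1024)  # one 1 MB block, reused
    start = time.perf_counter()
    with tempfile.NamedTemporaryFile(dir=directory) as f:
        for _ in range(total_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data to the device, not the cache
    return total_mb / (time.perf_counter() - start)

print(f"local: {write_throughput_mb_s(tempfile.gettempdir()):.1f} MB/s")
```

Sequential throughput is only one axis; random-I/O latency often matters more, which is why a dedicated tool is the better choice for a real decision.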

One group often has no understanding of its impact on other systems.

At a certain well-known bank, the VMware team was moving virtual machines around that had been created by the Citrix PVS servers. The VMware team used VMware tools, so the database behind the PVS server had no record of the new location. Teams didn't communicate. Months were spent trying to fix the "Citrix problem". Too many silos acted without understanding the many other complex systems, and there was no cross-training. The total system was too complex for every team member to have a conceptual understanding of all the different components.

This bank had contractors who passed all communication with the client through Customer Delivery Managers who didn't understand the subject matter.

Over-buying on your one-shot cap-ex

So often I see rack after rack of servers running at less than 5% CPU. A project is proposed and scoped. The proposal and business case, complete with all costings, goes to management for approval, and it is a one-shot approval. If you go back and say you need more hardware or software licences, that is considered a failure and a cost overrun. So you spec everything big from the start. Many solutions are difficult to right-size until you see how your users are actually using them.
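The under-utilisation claim is easy to check against your own monitoring data. Here is a sketch, with made-up CPU samples, that flags consolidation candidates below an assumed threshold:

```python
def underutilised(samples: dict, threshold: float = 5.0) -> list:
    """Flag hosts whose average CPU over the sample window is below
    `threshold` percent -- candidates for consolidation. The sample
    data and the 5% threshold are illustrative assumptions."""
    return sorted(host for host, cpu in samples.items()
                  if sum(cpu) / len(cpu) < threshold)

# Hypothetical utilisation samples (percent CPU per polling interval)
samples = {
    "web01": [2.1, 3.0, 1.8, 2.4],
    "db01":  [41.0, 55.2, 48.7, 60.1],
    "app02": [4.2, 3.9, 4.8, 4.1],
}
print(underutilised(samples))  # → ['app02', 'web01']
```

Even a crude report like this gives you numbers to put in front of management when the next one-shot proposal comes around.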

Educate your managers that every solution will be specified as 'just enough to do the job' but may need more resources if the project is successful and heavily used. A successful system will be heavily used by staff, and having to come back for more hardware resources should be seen as a sign of success.

Change your business culture so you don't have this one shot at cap-ex, with its incentive to over-specify and over-order.

It is not just hardware costs but 24x7 power, air-conditioning, maintenance contracts, software costs for backup and antivirus, patching and upgrades, lifecycle management, and so on. The total cost of ownership of every server is remarkably high.
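As a rough worked example of that total cost of ownership, where every figure is an illustrative assumption rather than a quote:

```python
def server_tco(purchase: float, years: int, power_kw: float,
               kwh_rate: float, cooling_factor: float = 0.5,
               annual_maintenance: float = 0.0,
               annual_software: float = 0.0) -> float:
    """Rough total cost of ownership for one server over its lifecycle.
    `cooling_factor` models cooling as a fraction of the power bill.
    All defaults and inputs are illustrative assumptions."""
    hours = years * 365 * 24
    power_cost = power_kw * hours * kwh_rate
    cooling_cost = power_cost * cooling_factor
    recurring = (annual_maintenance + annual_software) * years
    return purchase + power_cost + cooling_cost + recurring

# Assumed figures: $8,000 server, 5 years, 0.4 kW draw at $0.25/kWh,
# $600/yr maintenance, $900/yr backup and antivirus licences.
print(round(server_tco(8000, 5, 0.4, 0.25,
                       annual_maintenance=600, annual_software=900)))  # → 22070
```

Even with modest assumed inputs, the running costs end up roughly matching the purchase price, which is the point of the exercise.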

Leasing computer hardware

You have to be very careful about leasing server infrastructure; servers are often still performing essential roles 7 years after installation. Leasing companies make a lot of money on "lease inertia", where a lease intended to run for 3 years runs on for, say, 5 years. I have seen computer racks on 3-year leases when the useful life expectancy of the rack in the company could be 50 years.
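The arithmetic of lease inertia is simple and worth making explicit; the payment figures below are made up:

```python
def lease_overrun_cost(monthly_payment: float,
                       intended_months: int,
                       actual_months: int) -> float:
    """Extra cost from 'lease inertia': payments made after the
    intended end of the lease. Figures are illustrative."""
    return monthly_payment * max(0, actual_months - intended_months)

# A 3-year lease at an assumed $450/month that quietly runs on for 5 years:
print(lease_overrun_cost(450, 36, 60))  # → 10800
```

Two years of inertia at that assumed rate is more than many of the leased assets would cost to buy outright, which is exactly why the leasing companies love it.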

The cloud-first solution

Cloud is not necessarily the solution to the hardware problem; I have seen systems over-specified in the cloud too. Cloud companies make a substantial profit on these services: most of Amazon's profit comes from its AWS division. While cloud companies have economies of scale and a level of discipline in their range of offerings, they make a fat profit margin on the services they sell. Once you reach a certain size, most companies will find self-hosted hardware is better value for money.

I have seen emergency migrations out of the cloud due to unexpected costs. If a company is using microservices in a serverless design, there is a risk of endless loops. Programming mistakes can become VERY EXPENSIVE learning experiences.
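One defensive pattern against runaway invocation chains is a hop counter carried in the event payload, so a bug that sends the event in a circle kills the chain instead of billing forever. This is a generic sketch, not any particular cloud vendor's API:

```python
MAX_HOPS = 10  # assumed ceiling for any legitimate invocation chain

class RunawayInvocationError(RuntimeError):
    pass

def guarded_handler(event: dict) -> dict:
    """Illustrative hop-count guard for a serverless function that
    re-invokes downstream functions. Each hop increments 'hops';
    past MAX_HOPS the chain raises instead of continuing."""
    hops = event.get("hops", 0)
    if hops >= MAX_HOPS:
        raise RunawayInvocationError(f"invocation chain exceeded {MAX_HOPS} hops")
    # ... real work would happen here ...
    return {**event, "hops": hops + 1}  # payload for the next function

event = {"order_id": 42}
for _ in range(5):
    event = guarded_handler(event)
print(event["hops"])  # → 5
```

Billing alarms from your cloud provider are still essential; a guard like this just turns an infinite bill into a bounded one.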

Five tools to do the same job

I have been involved in software audits, and it is not uncommon for an enterprise to have multiple tools, sometimes each with a full site licence, doing the same job. It is common to see 2,000 or 3,000 different applications across an enterprise. Every new application is a risk. Every application may contain spyware; "No, that is not spyware, it is telemetry so we can better serve you." Every time that software company changes ownership, the next 'upgrade' may be spyware.
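A software audit of this kind can be partly automated. Here is a sketch, with a made-up inventory, that groups applications by functional category and flags the overlaps:

```python
from collections import defaultdict

# Hypothetical inventory: (application name, functional category)
inventory = [
    ("AppBackupPro", "backup"), ("SaveMyData", "backup"),
    ("NightVault", "backup"), ("ZipItAll", "compression"),
    ("PacketPeek", "monitoring"), ("WireWatch", "monitoring"),
]

def duplicated_categories(apps: list) -> dict:
    """Group applications by category and report every category covered
    by more than one product -- the rationalisation candidates."""
    by_category = defaultdict(list)
    for name, category in apps:
        by_category[category].append(name)
    return {cat: names for cat, names in by_category.items() if len(names) > 1}

for category, names in duplicated_categories(inventory).items():
    print(f"{category}: {len(names)} overlapping tools -> {', '.join(names)}")
```

The hard part in practice is assigning the categories, but once that mapping exists the overlap report writes itself.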

Rationalise your applications, centralise purchasing, and make sure every software product is Common Criteria certified. Many software products exist because of their ability to harvest data, not because they do what the advertised function claims.

Single vendor technology stack

I have encountered real problems with server hardware, Host Bus Adapter (HBA) drivers and firmware, SAN firmware and management software, and hypervisor compatibility. When it comes time to do upgrades, the compatibility matrices overlaid from different vendors can make life "interesting", in the way you might find the Nine Circles of Hell interesting.
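Before an upgrade, it can pay to encode the vendors' compatibility matrices and check the plan mechanically rather than by eye. The components and version numbers below are invented for illustration:

```python
# Hypothetical interoperability matrix: component -> versions supported
# by the target hypervisor release. Real matrices come from each vendor.
MATRIX = {
    "hba_firmware": {"3.1", "3.2"},
    "hba_driver":   {"12.0", "12.4"},
    "san_firmware": {"8.6"},
}

def check_upgrade_plan(plan: dict) -> list:
    """Return the components whose planned version is not in the
    supported set -- the mismatches that make upgrades 'interesting'."""
    return [comp for comp, ver in plan.items()
            if ver not in MATRIX.get(comp, set())]

plan = {"hba_firmware": "3.2", "hba_driver": "11.9", "san_firmware": "8.6"}
print(check_upgrade_plan(plan))  # → ['hba_driver']
```

The fewer vendors in the stack, the fewer matrices you have to merge into that table, which is the practical argument for the single-vendor approach.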

Then you have to manage licence renewals that occur at different times. They won't be synchronised, and they are designed to be financially inefficient for you.
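One common fix is to co-terminate licences onto a single renewal date, paying (or crediting) a prorated amount to get there. A rough model follows, with invented figures and without any real vendor's proration rules:

```python
from datetime import date

def coterm_cost(renewal: date, coterm: date, annual_price: float) -> float:
    """Prorated price to extend (or credit to shorten) a licence from
    its current renewal date to a single co-termination date. A rough
    model; real vendors apply their own proration rules."""
    days = (coterm - renewal).days  # negative means a credit
    return round(annual_price * days / 365, 2)

# Assumed renewals being pulled onto one co-term date of 2026-06-30:
renewals = {
    "hypervisor": (date(2026, 1, 31), 12000.0),
    "backup":     (date(2026, 9, 15),  4000.0),
}
for product, (renewal, price) in renewals.items():
    print(product, coterm_cost(renewal, date(2026, 6, 30), price))
```

Once everything renews on the same day, you negotiate the whole stack at once instead of piecemeal, which shifts the leverage back towards you.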

Nutanix with VMware can offer a fabulous and flexible solution, but do you really need all the features? Have you compared Red Hat Hyperconverged Infrastructure Software to Nutanix? Have a look at: https://www.gartner.com/reviews/market/hyperconverged-infrastructure-software/compare/nutanix-vs-red_hat

Consider commodity 1 and 2 unit rack-mount servers, such as Dell servers with plenty of disk, running Hyper-V (or Red Hat KVM) as a hypervisor. It can be a simple, cost-effective solution. Hyper-V can be free with your Microsoft licensing and installs on commodity servers. It doesn't have the full feature set of VMware, but will Hyper-V fulfil your requirements?

Look through the marketing hype and the latest bells and whistles. Most hardware is set up once, operating systems are installed, and applications are installed. Then application owners don't want to touch them until they are forced to. Many of the 'bells and whistles' of expensive products are never used.

Efforts should be made to reduce complexity at the expense of features, because the more complex a system is, the more likely you are to make mistakes. Try to keep the technology stack with a single vendor. All too often, vendors point at the other company's product and blame it for the problem.

A small hint on hardware: many defence and intelligence agencies won't use servers manufactured in China. How can you know the full function of every chip on the motherboard?

Mark Ellis

Cyber Security Consultant

Citrix Consultant

Digital Harmony Australia Pty Ltd

email telephone number