Competitive Overclocking: The GIGABYTE OC Lab and HWBot.org
Every field has its enthusiasts – the users that obsess over the smallest details to get the best, to be the best, and to excel in their field. In technology the most recognizable fields are audio and video, where enthusiasts will spend upwards of a hundred thousand dollars to kit out their home studio/cinema with the right equipment for the best experience. There is also an element of competitiveness between enthusiasts to own or create the best. This extends to PC chassis modifications, which can get rather elaborate and end up competing for prize money. Well, it turns out you can also be a PC enthusiast where the only thing that matters is speed. This is the art of the competitive overclocker.
Overclocking has been a part of reviews at AnandTech almost since the beginning – the ability to run hardware faster than rated gives an extra level of performance in day-to-day tasks, gaming, or high throughput workloads. In previous CPU generations it was common to find a user who had taken their 2.4 GHz processor and overclocked it to 3.2 GHz, getting the performance of a higher-class component at a large monetary saving. In the current generation of Intel processors, the -K SKUs are designed specifically for overclocking, and AMD processors are always good for a few extra MHz.
Competitive overclockers take the whole game a stage further. For this group of people, no speed is fast enough. They are not after a 24/7 stable result – the system only has to be stable long enough to run a benchmark (to verify the speed). As a result the equipment used can go way beyond standard PC hardware – examining data sheets and modifying the circuit boards to deliver extra power is not an uncommon sight in the higher echelons of the community. If this allows them to squeeze an extra 0.01% over someone else, then it will be done.
Because this community of performance enthusiasts influences how and where some regular users spend their money on new system setups, component manufacturers are jumping on board to produce hardware that fits the purpose. One of those companies is GIGABYTE, which over recent generations has produced a range of ‘OC’ motherboards, such as the current Z87X-OC, Z87X-OC Force, and previous generation products such as the Z77X-UP7 and X58-OC. These motherboards are designed for extreme overclockers to go wild with, allowing them to push the hardware further and faster than ever before.
Gigabyte is so committed to this venture that they have opened up the ‘Gigabyte OC Lab’ in their HQ in Taiwan. It is manned by their in-house overclocker HiCookie, former World #1 and an instantly recognizable name within the community. HiCookie is responsible for the features of the Z87 OC boards from Gigabyte this time around, and the OC Lab is his den to invite extreme overclockers to Gigabyte to break world records.
After Computex 2013, Gigabyte held an OC Event manned by HiCookie for over a dozen overclockers. I got a chance to cover the event, as well as interview Pieter-Jan Plaisier, the Project Manager for HWBot, the main website where overclocking records are verified and collated.
Start at the Beginning: Principles of Overclocking
For completeness, I am going to start at the very basics. Overclocking, in its simplest terms, is making a component of the PC run faster than what it is rated for. This concept predominantly applies to the CPU, the GPU and the memory, though additional elements of the system (the chipset, the uncore) can also be overclocked.
Typically overclocking a component voids the warranty. This is despite the fact that Intel sells processors specifically for overclocking (the -K SKUs like the i7-4770K and the i7-4670K) and GPU manufacturers provide software to help users overclock their components. By overclocking you accept the risk if the component breaks outside of its normal usage – if a system integrator sells a pre-overclocked system, they accept that risk in place of the component manufacturer, and your primary warranty remains with the integrator.
When a component is pushed beyond its specified speed, it typically requires more energy in order to achieve that speed and remain stable. This is done by increasing the voltage, or changing the power signal to improve stability (load line calibration). Increasing both the operating speed and the operating voltage increases the power usage, and the heat generated by the component. Increasing the speed higher requires more voltage, and thus more efficient cooling methods need to be used. This is why pre-overclocked GPUs often come with large coolers attached, such as the Gigabyte Windforce series.
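As a rough illustration of why heat rises so quickly, dynamic CPU power scales approximately linearly with frequency and with the square of voltage (P ≈ C·V²·f). The sketch below uses this rule of thumb with made-up example numbers – none of the clocks or voltages come from the article, and real chips also add static leakage that this ignores.

```python
# Rough estimate of how an overclock changes dynamic power draw.
# Dynamic power scales approximately with frequency x voltage^2
# (P ~ C * V^2 * f); the numbers below are illustrative, not measured.

def relative_power(f_base_ghz, v_base, f_oc_ghz, v_oc):
    """Return overclocked dynamic power as a multiple of stock power."""
    return (f_oc_ghz / f_base_ghz) * (v_oc / v_base) ** 2

# Example: a 3.5 GHz / 1.10 V chip pushed to 4.5 GHz at 1.30 V
factor = relative_power(3.5, 1.10, 4.5, 1.30)
print(f"~{factor:.2f}x the stock power draw")  # ~1.80x
```

The voltage-squared term is why a modest voltage bump costs disproportionately more heat than the frequency gain alone would suggest, and why cooling becomes the limiting factor so quickly.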
In the world of CPUs, the Intel and AMD stock coolers are designed to be sufficient for a non-overclocked system. In order to remain within thermal limits when overclocked, users will turn to bigger and more efficient air coolers, ranging from $10 up to $100 for the most extreme. Typically the expensive ones will be made of copper or a copper/aluminum mix, and use fans to move the heat away from the source as quickly as possible. Overlapping the high end of the air-cooler price range are closed-loop liquid coolers (CLCs), which use a liquid to move heat away from the source and then pass it through a radiator with a large surface area to dissipate the energy. For daily systems, enthusiasts will invest in custom water cooling, using high-pressure pumps to quickly move de-ionized water around a loop. A good water cooling system can take a CPU that runs at a high 90C on the stock cooler down to below 40C, giving more headroom for overclocking.
Very few enthusiasts go beyond water cooling for a daily system, as beyond this lies sub-zero cooling, used by the extreme overclockers most often in competing for the best scores.
There are a few different methods of sub-zero cooling.
- Phase Change: Similar to a refrigeration unit, a phase change unit uses a gas and a compressor to cool down a copper mass or ‘head’. The head is placed on the component to be cooled, and unlike a fridge, the system will stay on as long as needed. Phase change units are often tuned to a certain loading, such as the maximum rate of heat transfer at the head that they can handle. Beyond this limit the phase change cannot be used. When one refrigeration loop is used the unit is often called a ‘single stage’, and when multiple units are used in series (i.e. a first loop cools down a second loop which is attached to the CPU) it can be referred to as a ‘cascade’. Single stage units are usually tuned to around -40C to -55C (depending on the refrigerant), whereas multiple stage cascades (3+ stages) might go as low as -110C or lower. These units often require a large power draw of their own, such as 1600W for a cascade.
- Dry Ice/Liquid Nitrogen: By using a substance that turns into a gas at very low temperatures, a component can be cooled by a constant supply of it. Most competitive overclockers in live competitions will be using liquid nitrogen (LN2, liquid to gas at -196C), though as a stepping stone dry ice (DICE, solid to gas at -78C) can be used. In order to draw the heat away from the component, an overclocker will attach a pot – a large copper mass – to it. The design of the pot is often very important, and will determine how the component stays cool – if the pot is too light, the temperature will vary wildly, whereas if the pot is heavy it will keep the temperature steady but require a lot of LN2/DICE to reach the low temperatures (which costs money).
Overclocking at sub-zero temperatures has many issues associated with it. First is moisture condensing out of the air onto the motherboard or components – this is not a good situation! The system is typically insulated to prevent moisture building up on components, but also to control the ‘spread’ of the cold (no matter how anti-physics it sounds, it is the term used) to other components.
Many overclockers will use additional safeguards such as water repellant coatings on the motherboard to stop moisture causing damage, as well as use Vaseline or artist’s eraser along with plenty of moisture absorption towels to minimize damage.
Board preparation is a very large field in its own right, and the big overclocking forums have plenty of experts offering comprehensive guides to users interested in sub-zero overclocking.
Another set of issues with overclocking at sub-zero rears its head by a few names:
- Cold Boot Bug (CBB): Below a certain temperature, a component will not even boot up. The component has to be allowed to warm up (either naturally or aided by a heat gun) before it will boot again.
- Cold Bug (CB): Below a certain temperature, the system will crash. Again, the system will need to be warmed up.
- Cold Slow: Below a certain temperature, the system will perform slowly, and cause erroneous calculations or skip frames in graphical benchmarks.
Ideally an extreme overclocker wants none of these issues to occur, allowing them to bench freely, pouring in LN2 without worrying too much about how low the temperature goes. This is referred to as benching ‘full pot’, which is not always possible. For example, the latest Haswell CPUs all have a CBB around -135C, although the luck of the draw means that some CPUs will CBB at -110C or even -90C.
The Luck of the Draw
Not all components are created equal. Users trying to compare overclocks will have noticed this – their CPU might hit 4.5 GHz but a friend’s CPU might go to 4.8 GHz at the same voltage. This is the luck of the draw – despite the component being made in the same factory on the same process (maybe even the same silicon wafer), there is a small element of randomness in CPU production. As a result the manufacturer will bin the components – the components will be put into ‘bins’ depending on how they perform with relation to voltage. Chips that can perform at the higher speed at the lower voltage are often labeled as the faster components, and those that cannot meet the high standards of the top bin will be tested until they are suitable and will be sold as something else. Even within a bin, the silicon can perform very differently, especially at the high end.
As a result of this, and depending on the nature of the architecture (some architectures end up more varied than others), enthusiasts can end up searching through hundreds of different CPUs, GPUs or memory sticks to find the ones that perform the best. Most competitive overclockers will do this with their own money, keeping the best and selling on the ones that do not perform as well in order to buy more. Competitive overclockers with backing from a hardware manufacturer might get access to a number of CPUs to test, while those who work as ‘warehouse overclockers’ for the big hardware distributors may go through thousands of chips while pre-binning for pre-built systems, setting aside the best ones and buying them personally for overclocking.
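The binning process described above can be sketched as a toy simulation: each chip off the line needs a slightly different voltage to hold a given clock, and the manufacturer (or the overclocker hunting for a golden sample) sorts them accordingly. All numbers and thresholds here are made up purely for illustration.

```python
# Toy illustration of the "silicon lottery" and binning: each chip
# needs a slightly different voltage to hold a target clock, and
# chips are sorted into bins by that requirement.
# All voltages and thresholds are invented for this example.
import random

random.seed(42)  # deterministic for the example

def required_voltage():
    """Voltage (V) a randomly drawn chip needs for the target clock."""
    return round(random.gauss(1.20, 0.04), 3)

chips = [required_voltage() for _ in range(10)]

# Chips needing less voltage for the same clock go into the better bin
bins = {"top bin": [], "standard bin": []}
for v in chips:
    bins["top bin" if v <= 1.20 else "standard bin"].append(v)

for name, group in bins.items():
    print(name, sorted(group))
```

Even within the "top bin" the spread is visible, which is why overclockers keep testing individual samples rather than trusting the label on the box.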
The Knowledge and The Software
Competitive overclocking is much more than having the best hardware – being able to tweak it to the best values is a skill in itself. Knowing how a system performs with changes in voltage is something that is learned more through experience than anything else, and as such groups of overclockers who compete for the same team will often meet up to discuss tactics, overclock together, or populate numerous threads on their forums with the best way of performing.
The level of tweaking to get the best performance goes beyond the hardware, so much so that competitive overclockers will also tune the software. In order to verify an overclocker’s competence, they will have to run a benchmark and submit a result. Sometimes these benchmarks can be simple affairs to verify CPU speed, whereas others are 10 minutes+ requiring careful attention to hardware settings and especially temperatures when dealing with sub-zero conditions. The benchmarks most commonly associated with competitive overclocking are SuperPi and the 3DMark range from Futuremark, known to stress the CPU and GPUs to varying levels.
When dealing with a benchmark, running it on the right software platform is crucial. For example, in most situations SuperPi might prefer Windows XP over Windows 7 or 8, especially when Windows XP is stripped down to the bare essentials.
Most competitive overclockers will use custom operating system builds designed to lower the memory footprint of the OS and maximize the priority of the benchmark. Part of this is to reduce the instability of the operating system which can cause the system to crash when on the bleeding edge.
Other benchmarks might require preparation, such as storage based benchmarks which may prefer different types of RAID configurations depending on the sections of the benchmark. Most commonly during some of the later versions of 3DMark overclockers will have high CPU clocks for the GPU elements of the benchmark, then use features on the motherboard to reduce the CPU speed for the heavy CPU computational elements. This all wraps under the knowledge of how the benchmark runs, and is a vital tool in competitive overclocking.
Competitive overclockers argue about how much of overclocking is in the hardware and how much is in the software/preparation. When competing at the top, both are needed, although in the lower classifications knowledge of the benchmark can sometimes provide a good advantage when the hardware is not the best.
How to Compete
The main website where competitive overclockers compete against each other is HWBot.org, which has its origins in trawling forums for overclocking results and placing them in a database for people to compare. It has slowly evolved into a system of leagues in which people compete in terms of overall score (global), score relating to the hardware (hardware score), team against team, air/water against air/water, while sponsored overclockers can pit their wits against each other in an F1-style scenario.
HWBot.org currently takes submissions from users with any hardware for a pre-defined set of benchmarks. In order to submit qualifying scores, various rules have to be followed to make the score verifiable, such as submitting a screenshot with CPU-Z and GPU-Z windows open to clarify the hardware used. 3DMark submissions can also be made with Futuremark ORB links as verification (ORB is FM’s database system).
HWBot is supported by several hardware manufacturers, including GIGABYTE. Gigabyte sees many positives from the relationship with HWBot, especially in discovering trends about which manufacturers' hardware overclockers prefer when attempting to break records. One evolution of the partnership with HWBot is centered around Gigabyte's in-house overclocker, the instantly recognizable HiCookie. HiCookie has his OC Lab at Gigabyte HQ in Taiwan, where he has helped develop motherboards such as the Z87X-OC and Z87X-OC Force, both for overclockers to beat each other with and to push world records higher.
Format of this Article
Part of my trip to Computex this year was to take a look at the OC Lab during an overclocking weekend, where competitive overclockers from around the world took advantage of a few extra days in Taiwan to meet up and use some of HiCookie's liquid nitrogen in the process. Gigabyte was on hand to supply the space, the motherboards, and the refreshments, and some rather interesting scores came up as a result. I took a lot of pictures for everyone to see, and as part of the coverage, I also managed to sequester Pieter-Jan from HWBot and Dino from Gigabyte for an interview.