The Myth of the Lazy Miner

Every once in a while I hear some variant of this sentiment: “Mining is so damn easy. No, really f'n easy. I just sit on my lazy ass and my computer makes money for me. Seriously, so easy!” To be clear, we are talking about Scrypt mining- you know, for Litecoin and other altcoins, like Doge. These are mostly people running graphics cards at home, the way Bitcoin mining used to work before the days of ASICs.

It's a nice fantasy, anyway.

A Reality Check, the Small Time Miner

The folks saying these things must be running small operations- maybe a single GPU, or a single rig with two or three GPUs. And in those cases, I would tend to agree: the complexity involved at that scale makes things fairly easy, at least if you are already familiar with computer hardware. But if they were maintaining a medium-sized (or larger) rig cluster over any appreciable length of time, I doubt they would be saying this endeavor is a “lazy” one.

Getting Real with Multi-Rigs

In my own experience, designing, planning, purchasing, building, tweaking, and maintaining my first few rigs- an investment of my entire savings at the time- was one of the most taxing endeavors of my life, and please note I'm far from new to the PC hardware and software game. I'm talking three 80+ hour weeks, sleeping just a few hours a night, eating when I remembered to, and generally coming to the brink (and if I'm honest, at times slightly over the precipice) of complete exhaustion- mental, emotional, physical. Having my entire financial well-being invested in this project, I was clearly motivated- and these were the heady days when Chinacoin (CNC) and Feathercoin (FTC) were hitting the market at mind-blowing profitability ratios. Aside from everything else, the amount of time involved merely in locating a working mining pool that was neither a scam nor under a denial of service attack was huge. I lost track of the number of times when, heavy-lidded and struggling to stay awake and functioning, I fell asleep in a chair in front of scrolling black-and-white cgminer status screens, only to wake up a short time later to continue the epic battle of man versus machine.

Good Neighbors and Problem Children

Piss off!

Running hardware to its limits is always an interesting challenge, and it turns out that getting consumer-grade graphics hardware to be utterly stable is perhaps slightly out of reach of mere mortals. Of course, we're not only attempting to run the hardware as fast and as stably as possible- we are also trying to do so with the least amount of electricity, via undervolting. It makes for an interesting balancing act. Each individual GPU has its own personality- and some of these personalities aren't exactly friendly. Some are completely undemanding and will give you no problem at all, the equivalent of a friendly wave from your unobtrusive neighbor. “Hey Bob. Yep, just making money for ya. No problem!” While others, like badly behaved children, will constantly whine: “We're going too fast! Ugh, I'm gonna be sick! Oh my god! It's soooo hot! Are we there yet? I'm going to take a nap now. What? You can't make me! You don't tell me what to do!” It's bad enough dealing with one of these “problem children”; even more fun is when they take an entire rig down along with them during a tantrum, and as a rule, this always seems to happen about five minutes after you go to bed. Of course the most truly irredeemable of these are sent to the orphanage- the RMA department. Good luck with your new parents, kids.

Infrastructure Counts

Another factor commonly underappreciated is the supporting infrastructure. For the gamer-miner with a single GPU put to the mining task while not playing the latest video game, this doesn't even enter the equation. You merely run your configuration with cgminer or equivalent while you're sleeping or otherwise not playing Battlefield 4. But when we start speaking of ten, twenty, or forty GPUs and beyond, we enter new territory.
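At that single-GPU scale, the entire “operation” really can be a one-liner. A minimal sketch, assuming a Scrypt-capable cgminer build- the pool URL, worker name, and password are placeholders:

  # Pool URL and credentials are placeholders - substitute your own.
  cgminer --scrypt -o stratum+tcp://your.pool.example:3333 -u workername -p password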

A very clean looking setup

Electrical

First of all, the electrical situation demands some keen attention. Unless you're already an electrician, some serious research is required. We can't just plug this stuff in willy-nilly; we begin to run into hard limits. Breakers, wire sizes, amp ratings, operating voltage, plug styles, safety margins- yes, did I mention safety? We are dealing with a force that could potentially kill us if we aren't methodical about our process. And something that could kill you always deserves at least a little pause for thought. Then there's actually physically doing it all- and usually nothing goes quite as easily as the instruction manual suggests. I suppose some folks simply contract this part out to someone else if they don't feel confident enough to play around in that realm. Either way, if you're smart you end up running everything on 240V (at least in the USSA), due to the increased efficiency of most high-grade power supplies at that voltage, as well as the wiring advantage of halving your amperage, since you are doubling the usual voltage.
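To make the amperage point concrete, here's a back-of-the-envelope sketch- the wattages are made-up examples, and none of this substitutes for your local electrical code:

  # Rough sketch: why 240V halves your amperage for the same rig power.
  # Example wattages are hypothetical; check your own hardware and local code.
  rig_watts = 1200          # one hypothetical 4-GPU rig drawing ~1200W at the wall
  rigs_per_circuit = 2
  total_watts = rig_watts * rigs_per_circuit

  for voltage in (120, 240):
      amps = total_watts / voltage
      # Common rule of thumb: keep continuous loads at or below 80% of the
      # breaker rating, so a 20A breaker gives you roughly 16A to work with.
      print(f"{total_watts}W at {voltage}V draws {amps:.1f}A "
            f"(continuous load wants a breaker above {amps / 0.8:.1f}A)")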

Rig Configuration

Then there's the question of how each rig will actually be physically configured. Will you buy umpteen regular PC cases hosting just a few GPUs each? Will you custom-build rig frames from aluminum stock, or perhaps order pre-made ones, a platform for six GPUs each? Will you go down some other unique or dangerous path? Two-by-fours and plywood? Hot glue and duct tape? Of course some people simply stack the components directly on top of the cardboard box that the motherboard came in, and that does seem a bit lazy, if not entirely practical, at least initially. If you're really serious you are going to want risers to keep the GPUs spaced away from the motherboard and from each other.

Space Considerations

And we continue on into the actual physical space considerations- where will you locate these rigs, once you figure out how you want to configure them? Industrial-grade steel shelving units? Sprawled on every surface in a garage? Wobbly plastic shelves from Wam-Lart? Perhaps your rig design is stackable to begin with? If you happen to have other people living with you, this question becomes even trickier. Many locate in basements; some resort to attics, backrooms, or altogether remote locations. It's all a question of logistics and must be considered right along with the question of appropriate wiring and electrical supply. And not to be underestimated- when you need to service or otherwise maintain a rig that is having issues, how easy is that access going to be?

Don't let this be you.

Dealing With Heat

But of course we're not done yet. One of the factors of this territory that I personally under-appreciated to begin with was the heat. Each GPU is a mini electric space heater all to itself, and when you begin to cluster many together in the same space, you get- what else- considerable heat to deal with, and that's usually not a good thing. GPUs (or PSUs or motherboards or CPUs, or most electronics for that matter) don't like to be very hot, and running them too hot for too long likely shortens their life considerably. I hope you are feeling like you want to be your own HVAC guy (or gal), because this absolutely needs to be dealt with. Some approaches that work with varying degrees of success: fans of all kinds, from additional 120mm computer fans located strategically between or beside the GPUs, to 20" box fans blowing over entire rigs, to high-RPM exhaust fans sucking all the heat out a window or into a duct that goes elsewhere. We can stack functions here if we are in a northern climate with cold temperatures (aka winter) and enjoy the otherwise wasted heat as a way to heat rooms- or, with enough rigs, an entire home. If you find that you are still running hot, you can just open a window.
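For a rough sense of scale, here's a sketch of the arithmetic- the wattages and temperature rise are illustrative guesses, not measurements:

  # Rough sketch: how much heat a rig cluster dumps into a room, and roughly
  # how much airflow it takes to carry that heat away. All numbers are
  # illustrative placeholders.
  WATTS_TO_BTU_HR = 3.412   # 1 watt of draw ends up as about 3.412 BTU/hr of heat

  rig_watts = 1200          # hypothetical rig drawing 1200W at the wall
  num_rigs = 4

  total_watts = rig_watts * num_rigs
  btu_per_hr = total_watts * WATTS_TO_BTU_HR
  print(f"{total_watts}W of rigs is roughly {btu_per_hr:.0f} BTU/hr of heat")

  # Standard sensible-heat approximation: BTU/hr ~= 1.08 * CFM * delta_T(F).
  # If you can tolerate exhaust air 15F warmer than intake air:
  delta_t_f = 15
  cfm_needed = btu_per_hr / (1.08 * delta_t_f)
  print(f"That takes roughly {cfm_needed:.0f} CFM of airflow at a {delta_t_f}F rise")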

Summer Is Coming

Summertime is a different story. If your electricity is cheap enough, you can simply run A/C units- usually fairly powerful ones are required to compensate for the massive heat generated, and that's a whole other project in itself. Plain air cooling may be pulled off to some extent: if you locate in a basement or other space with an already lower ambient temperature, and go nuts with fans, including exhausting hot air directly outside, you have more of a chance, but the rigs may still hit their temperature thresholds and throttle back to compensate on really hot days.

The truly industrious may embark on a campaign of water-cooling each GPU. Ooh, water and expensive electrical components, sounds fun, doesn't it?

Is this still sounding lazy?

Let's Not Forget Software

We haven't even talked about software yet. Physical concerns aside- and I do recommend only high quality hardware when building dedicated mining rigs; the cheap crap will only make you pay for it sooner or later with your time and frustration- we still have to properly configure the OS, drivers, and mining software, and then individually tweak each GPU to find the sweet spot. Here we delve into the realm of flashing custom BIOS images for each video card, text-editing configuration files, moving sliders in utilities like Trixx and Afterburner, and a lot of error and trial and freezing and crashing and rebooting, at least for a time. Fan speed, core speed, memory speed, powertune percentage, core voltage, intensity, thread concurrency- all will be fun variables to play with, for each card, many times over. If you are lucky enough to have found enough of the same GPU in stock to buy in bulk, at least those will be similar, so that does help. But good luck finding more than a few of any decent GPU in one place these days, or circumventing the customer purchase limit before they go back out of stock again five minutes later.
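To give a flavor of the knobs involved, here's a sketch of a tuned cgminer launch- every value below is an illustrative placeholder, and each card (often each individual sample of the same card) ends up wanting its own numbers:

  # All values are placeholders - tune per card. Comma-separated values
  # (e.g. --gpu-engine 1050,1000,1080) apply per-GPU on a multi-card rig.
  cgminer --scrypt -o stratum+tcp://your.pool.example:3333 -u workername -p password \
          --intensity 13 --thread-concurrency 8192 \
          --gpu-engine 1050 --gpu-memclock 1500 --gpu-powertune -20 \
          --gpu-fan 40-85 --temp-target 75 --temp-cutoff 95 --auto-fan \
          --api-listen

The --api-listen flag matters later: it lets monitoring tools query the miner over the network, which is the hook the maintenance sketches further down lean on.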

This looks pretty sketchy, but at least they are dealing with the heat!

Getting Somewhere

Okay, so let's say we got one rig built. It's modular, the GPUs are nicely spaced apart for good airflow, all the components are of high quality, all electrical considerations have been taken into account (powered risers and what have you), the OS is installed, drivers are stable, the mining software runs correctly, and you've even tweaked each GPU to gain maximum hash and WU for the minimum electrical draw. You have a nice location for it, and a nice custom electrical outlet for it to plug into. Now it's time to do that build again several more times. A hint for the weary: if you are using identical hardware (not including the GPUs), Ghost-image your hard drive and replicate it on your future rigs to cut down on the toil factor. Then you merely have to tweak a few settings to get the next one running, rather than starting completely from scratch.

Maintenance

So the rigs are built, they are plugged in and running, nothing melted, and you are not yet having helicopters with FLIR hovering over your house. Congratulations! Now there's routine maintenance. Fans don't last forever, and must be lubricated every so often. I hope your GPU manufacturer made that easy for you. Keep an eye on temperatures, as they are often the first clue to a failing fan; the reported RPM will not always be the first signal, since multi-fan units often report RPM from only one out of two or three fans. Dust and other junk (pet hair if you have any, random detritus) builds up over time- don't stock up on canned air; at this level of commitment, just get an air compressor with an appropriate nozzle attachment.
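Watching temperatures across a dozen cards by hand gets old fast. Here's a minimal monitoring sketch, assuming cgminer was started with --api-listen (default port 4028); the field names follow the cgminer API documentation as I recall it, so verify them against your version's API-README:

  # Minimal sketch: poll cgminer's API and print each GPU's temperature and
  # fan speed. Assumes cgminer runs locally with --api-listen on port 4028;
  # field names should be checked against your version's API-README.
  import json
  import socket

  def cgminer_command(command, host="127.0.0.1", port=4028):
      """Send a JSON command to the cgminer API and return the decoded reply."""
      with socket.create_connection((host, port), timeout=10) as sock:
          sock.sendall(json.dumps({"command": command}).encode())
          reply = b""
          while True:
              chunk = sock.recv(4096)
              if not chunk:
                  break
              reply += chunk
      # cgminer terminates its reply with a null byte
      return json.loads(reply.rstrip(b"\x00").decode())

  if __name__ == "__main__":
      for gpu in cgminer_command("devs").get("DEVS", []):
          print(f"GPU {gpu.get('GPU')}: {gpu.get('Temperature')}C, "
                f"fan {gpu.get('Fan Speed')} RPM ({gpu.get('Fan Percent')}%)")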

Random Failures

Sometimes hardware fails. Statistically, the more you have, the higher the chance that something will. Sometimes components fail halfheartedly, causing erratic behavior that is extremely hard to pin down to a specific component or card. You might have a melted or partially melted PCI-E connector pin at some point- who knows why these things happen in an otherwise over-specced system, but it could be on either the PSU or GPU end, sometimes both. Bone up on your RMA skills, just in case. Have spare parts already on hand if at all possible to avoid downtime.

And of course mining pools fail from time to time as well. Configure your software to have several failovers! A pool that doesn't outright fail but begins having problems on the server side can absolutely wreak havoc on your rigs as well. Some things you simply have to keep an eye on over time.
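In cgminer this is just a matter of listing more than one pool. A sketch, with placeholder pool URLs and credentials:

  # List pools in priority order; with --failover-only, cgminer sticks to the
  # first pool and only falls back when it stops responding. All URLs and
  # credentials below are placeholders.
  cgminer --scrypt --failover-only \
          -o stratum+tcp://primary.pool.example:3333 -u worker -p pass \
          -o stratum+tcp://backup1.pool.example:3333 -u worker -p pass \
          -o stratum+tcp://backup2.pool.example:3333 -u worker -p pass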

Additionally, although rarer in most areas, the internet connection sometimes fails. Consider having a backup connection- say, a 3G device hooked up to an OpenWRT-capable router.

I Can't Get No Relief

If you've gotten this far, you know you want to simplify everything as much as humanly possible so you can get some sleep without worrying about your expensive hobby taking a siesta. Utilities like CGWatcher (and CGRemote) go a long way toward restoring sanity to the miner, allowing many failsafes to automatically kick in when various failure situations are encountered. Even so, not all failures can be handled with software- there are always going to be times when someone will just need to press the reset button.
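The core idea behind those tools is simple enough that a toy version fits in a few lines. A sketch only- nowhere near a CGWatcher replacement- assuming the cgminer API is listening locally; the restart command, check interval, and zero-hashrate threshold are placeholders for whatever fits your setup:

  # Toy watchdog sketch: if the cgminer API stops answering or the 5-second
  # hashrate reads zero, restart the miner. Restart command, check interval,
  # and field names are placeholders/assumptions - adjust for your setup.
  import json
  import socket
  import subprocess
  import time

  RESTART_CMD = ["systemctl", "restart", "cgminer"]  # placeholder restart command
  CHECK_INTERVAL = 60                                # seconds between checks

  def get_summary(host="127.0.0.1", port=4028):
      """Ask the cgminer API for its 'summary'; return None if it doesn't answer."""
      try:
          with socket.create_connection((host, port), timeout=10) as sock:
              sock.sendall(json.dumps({"command": "summary"}).encode())
              reply = b""
              while True:
                  chunk = sock.recv(4096)
                  if not chunk:
                      break
                  reply += chunk
          return json.loads(reply.rstrip(b"\x00").decode())
      except (OSError, ValueError):
          return None

  while True:
      summary = get_summary()
      hashrate = None
      if summary:
          stats = summary.get("SUMMARY", [{}])[0]
          hashrate = stats.get("MHS 5s") or stats.get("KHS 5s")
      if not hashrate:
          print("Miner unresponsive or hashing at zero - restarting")
          subprocess.run(RESTART_CMD)
      time.sleep(CHECK_INTERVAL)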

All of that said, enjoy the stretches of time that do sometimes occur where nothing much is happening! And don't forget to go outside once in a while.

You won't find a lot of time for this, but some.

In conclusion: lazy my ass!

Brought to you by Cryptochief.com

All articles will eventually be archived at the cryptochief.com website, now under development.

Contact the cryptochief: admin at cryptochief dot com

Mining | Hardware | Cryptocurrency | Computers | Bitcoin | How To

