Andrew DeChristopher

ISP owner, climber, pool player, and full-stack engineer from Boston. OSP/RF engineering. Maps are the future.

(#6) Tue Feb 2nd, 2022 - Labbing

The home lab. Those not in the tech industry may know of it as a workshop, a studio, or even a nook. One thing is certain despite the name: creators and thinkers do their best work in their happy space. As an avid technologist and only an aspiring DIY-er, my definition falls more towards the description of a "lab". In practice this isn't anything fancy like you'd see in a mad scientist's basement or garage. For me, it's an unassuming cluster of tiny computers and networking gear tucked away quietly in the corner of my house.

Apart from the datacenters I manage - chugging power, fans spinning and screaming wildly - the technology I surround myself with at home is far from the cacophony of compute I've exposed my ears to for the past decade. It's made up of Intel NUCs and other small network gear that sport silent fans, or no fans at all. As both a professional and an enthusiast, I've always enjoyed keeping up with the latest technology and trends in networking and software engineering. As a business owner, I'm obliged to fully understand the technology I implement to serve customers. I'm sure you can see the position I'm in. Time to really learn some networking protocols.

Gone are my days of puttering around with webservers and databases. Old friends of mine would speak of the good ol' days of ancient basement servers crawling along, yielding only handfuls of compute power. It's the roaring '20s now; my time is spent simulating global networking technologies like BGP and anycast across dozens of virtual hosts and networks. The fun comes naturally even though the knowledge garnered has immense value to my career. I guess I'm lucky in that sense.

Image of small home computer lab

To achieve this I didn't need to increase the scale of what once was a few early 2000s HP and SuperMicro rack servers that consumed thousands of watts. In fact, it's quite the opposite. Moore's law has really helped us over the past few years. A single Intel NUC now has five times the compute power my "lab" had in high school. With modern storage technology, I'm running terabytes of fast storage in less than a cubic foot of space. Virtualizing tens or hundreds of servers is attainable for 100 watts or less. What a time to be alive.

Maybe it helps having friends to lab with, too. It's like a collaborative art studio, or a shared carpenters' workshop. Bouncing ideas and concepts off of like-minded individuals makes not only for great collaboration, but innovation as well. I'm not out here claiming to be innovating anywhere in the spaces I occupy, but it's damned fun to set up a big tunneled mesh between your house and all of your friends so you can run a real BGP network and play with policy routing and ECMP. I'm just now realizing this blog definitely has a target audience and I'm a little sorry if I lost you here. Better start learning, eh?

Find your lab space and refine your skills. If you're lucky enough to truly appreciate the thing you do for a living, I'm sure you've already found it somewhat. If not, keep hunting - you'll discover it soon enough.

If you love what you're doing, you've already succeeded.
- Watsky, Never Let it Die

Live well.

(#5) Mon Jan 24th, 2022 - Colocation

Looks like I missed a day. I was going to talk about something but I ended up falling asleep last night. The 100 day blogging challenge is over. I don't think a post a day is going to be sustainable for me while keeping the subject matter palatable and interesting for the reader. Today, though, we'll talk about setting up a colocation business since that's what I've been up to the past few weeks. My aim with these posts is to be transparent about operating an ISP, and to inspire others to do the same. The world will be better when everyone can access the internet. By learning, collaborating, and experiencing different mindsets, cultures, and perspectives, we can all become better to each other and ourselves.

Fitchburg Fiber is primarily a WISP (Wireless Internet Service Provider) that's branching into the fiber business slowly. I guess you could call it a hybrid FISP (Fiber ISP). Our biggest issue at the moment is plain and simple: money. The largest cost for a new ISP, most of the time, is an upstream internet connection. Whether you're getting some simple IP service from your local fiber provider or peering via BGP with a nearby global carrier, costs in the Northeast US have been trending towards $1500 per month for an unmetered symmetrical gigabit connection. If you're in a data center the costs are almost always much lower, but I'd wager most people starting ISPs don't have data centers in their neighborhood.

We use Cogent as our upstream and pay a hair over $1300 per month for a gigabit circuit. It brings us directly back to One Summer St. in the heart of Boston about 41 miles away as the crow flies. We're 5ms away from Cloudflare, AWS, and Facebook, and 6-10ms from Google and many others. Not bad! We peer with Cogent via BGP to announce our autonomous system (AS399134) to the public internet along with 256 public IP addresses (52.124.25.0/24) that we got at auction. Since we don't have many customers at the moment, a portion of our monthly recurring costs is being paid from capital we've personally injected into the business over the past years. As we grow, this amount will shrink and we will hit and surpass equilibrium, becoming profitable. Until then, we've got a question on our hands: how do we generate more revenue to stay afloat for the least amount of money possible? My answer: colocation.

Server computer in datacenter

To the colo customer that isn't a major tech company, reliability and access are trumped by one factor most of the time: cost. We're approaching the cost model in a straightforward way: $75 per rack-unit per month for a gigabit connection. Bring your own ASN and IPs or use one of ours. Simple right? Maybe not so much.

I've personally been a colo customer for a few years now and there's more to it (from the perspective of the provider) than just putting someone's server in your rack and giving them internet. If only it were that easy. There are a few major concerns in my mind with shared space colocation:

  • Physical access to the equipment
  • Monitoring bandwidth usage of customer ports
  • Providing out of band management access
  • Keeping everything online

To harp on the first one a bit, shared space colo is difficult and requires immense amounts of trust. All of your customers are renting space in a rack per U (rack unit). This means their servers may be racked up right above or below someone else's equipment. Ensuring the environmental conditions are favorable such that one server doesn't overheat the servers above and below it is critical. There's also literally nothing stopping a customer from strolling in and unplugging someone's stuff. The answers to this problem: legal contracts, audited tap card access, and TONS of cameras. Simple!

As a new ISP, we don't have infinite bandwidth to throw around like a larger provider might seem to. This means we have to be very conservative with what types of customers we seek out and engage with. We simply can't acquire a colo customer that's going to install 24U of servers and run an Alexa top 100 website generating gigabits per second of traffic. While that'd be nice, it'd require major upgrades to our infrastructure and upstream internet connection, both of which we can't afford at the moment. Likewise, we can't let in large-scale crypto miners that could potentially consume many thousands of kWh per month. Power isn't free and our plans are probably priced a bit too competitively for this use case. Because of this, we're only looking for smaller businesses, hobbyists, and prosumers that may need small hosting, offsite backup, or other always-online services like that.
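To put some rough numbers behind that: here's the kind of back-of-the-envelope math we do when sizing up a prospective customer's power draw. The wattages and the electricity rate below are illustrative assumptions, not our actual figures.

```python
# Back-of-the-envelope monthly power cost for a hypothetical colo customer.
# Draws and the $/kWh rate are assumptions for illustration only.
def monthly_power_cost(draw_watts: float, rate_per_kwh: float, hours: float = 730) -> float:
    """Energy cost for a constant draw over one month (~730 hours)."""
    kwh = draw_watts / 1000 * hours
    return kwh * rate_per_kwh

# A 1U server idling around 150 W vs. a miner pinning 1.5 kW:
server = monthly_power_cost(150, 0.20)   # ~$21.90/month
miner = monthly_power_cost(1500, 0.20)   # ~$219.00/month
```

At $75/U, one miner chewing through ~1100 kWh a month eats the margin on several rack units by itself, which is exactly why that customer profile is off the table for us.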

Sticking to the tried and true 95th percentile bandwidth paradigm seems like a safe bet for us. Most customers are allowed 20Mbps of bandwidth at the 95th percentile. They still have connections capable of a gigabit burst, but if at the end of the month their 95th percentile traffic is over 20Mbps, we simply bill them $2 per Mbps they go over. This isn't as much of a deterrent as it may seem as most customers don't even come near 1Mbps after the month is over. It simply sets an expectation and it works well enough for us. Most of our bandwidth should be allocated to our traditional residential and business customers anyways. We are an ISP after all.
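The billing logic above is simple enough to sketch in a few lines. This assumes 5-minute throughput samples in Mbps over the month; the function names are mine, not any particular billing system's.

```python
# Sketch of 95th-percentile billing, assuming one throughput sample
# (in Mbps) every 5 minutes for the whole month.
def billable_mbps(samples: list[float]) -> float:
    """Sort the month's samples, discard the top 5%, and bill on the
    highest remaining sample."""
    ordered = sorted(samples)
    index = int(len(ordered) * 0.95) - 1
    return ordered[max(index, 0)]

def overage_charge(samples: list[float], committed: float = 20.0,
                   per_mbps: float = 2.0) -> float:
    """$2 per Mbps over the 20Mbps commit; bursts in the top 5% are free."""
    rate = billable_mbps(samples)
    return max(rate - committed, 0.0) * per_mbps
```

This is why the scheme isn't much of a deterrent: a customer can burst to a full gigabit for up to 5% of the month (about 36 hours) without it ever showing up on the bill.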

Out of band management access is another managed service altogether that's fairly easy to provide and proves itself invaluable when your customers lock themselves out of their pfSense routers by somehow deleting the WAN interface. In our case, it was simple to solve. We've already got a WireGuard gateway that allows us in-band access to our management network. From there we can access WinBox, SSH, and other services on routers, switches, and servers within our corporate infrastructure. For colo customers, we're simply providing them a WireGuard configuration that allows them access to a VLAN that all of their IPMI, iDRAC, iLO, ESXi management net, etc. are plugged into. This way they're able to fix a lockout without having to go onsite.
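The client side of that setup looks something like the fragment below. Every key, address, and hostname here is a placeholder for illustration, not our real network.

```ini
# Hypothetical WireGuard config handed to a colo customer.
# Keys, addresses, and endpoint are placeholders.
[Interface]
PrivateKey = <customer-private-key>
Address = 10.66.0.12/32

[Peer]
PublicKey = <our-gateway-public-key>
Endpoint = vpn.example.net:51820
# Only the customer's own management VLAN is routed over the tunnel:
AllowedIPs = 10.80.12.0/24
PersistentKeepalive = 25
```

The important knob is AllowedIPs: scoping it to a single per-customer VLAN means one customer's tunnel can't reach another customer's IPMI, which matters a lot in a shared-space colo.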

APC Power Distribution Unit

But what if a server or other appliance completely locks up? We all hate that dreaded ESXi purple screen of death. To solve this we're also providing remote access to the APC PDU that their hardware is powered from. It's an AP8659NA3 if you're curious. Metered and switched per port, it allows us to keep track of customer power usage all the while allowing them to log in to switch their ports on and off again to power cycle locked up systems. This isn't necessary most of the time now that IPMI and the like run on a separate SoC attached to the server's motherboard, but it's a nice option to have in an emergency.

The last point is a tough one to hit since power reliability is an expensive problem to solve when you aren't in a tier-one data center. Most places in the US simply don't have the luxury of picking from multiple power providers. In our case it's just one, a utility company called Unitil. Even if we could get multiple feeds into the building, we're still at the mercy of an outage if a line goes down outside our building or if a transformer blows down the street. To plan for this contingency, we're investing in a 7kW natural gas powered generator with an automatic transfer switch for the two circuits that power our micro data center.

Since generators often take 10-20 seconds to spool up when the ATS decides the mains power has puttered out, we also need a large enough UPS battery backup array to hold us over for that short time until the generator takes over. The UPS will also provide power conditioning for the incoming power that may or may not be properly phased from the tiny explosions going on upstream at our generator. We think this strategy will take us a few years into the future, especially since we don't need to store liquid propane or natural gas on site. We can foreseeably power the entire operation for days or weeks at a time with the grid entirely out, at very little relative cost. Compared to the cost of having customers offline, we'd prefer the upfront cost of the equipment. That's the power of mains gas for you. While this may all seem a bit overkill since the last reported outage was over 5 years ago during a crazy blizzard, one can never be too prepared. Especially with how our climate has been trending lately.
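Sizing the UPS for that 10-20 second gap turns out to be the easy part; the energy involved is tiny. A rough sketch, assuming a 2 kW load (a made-up figure for illustration) and a generous safety margin:

```python
# Rough UPS sizing to bridge a generator start. The 2 kW load and the
# 3x margin are illustrative assumptions, not our actual numbers.
def bridge_energy_wh(load_watts: float, gap_seconds: float, margin: float = 3.0) -> float:
    """Watt-hours needed to carry the load through the transfer gap,
    padded for battery aging and inverter losses."""
    return load_watts * (gap_seconds / 3600) * margin

# 2 kW for 20 s with 3x margin -> roughly 33 Wh. Even a modest rack
# UPS stores far more, so the real constraint is inverter wattage,
# not battery capacity.
needed = bridge_energy_wh(2000, 20)
```

In other words, almost any UPS that can carry the load at all can carry it through a generator start; the battery array only needs to be large if you also want meaningful standalone runtime.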

Moving into the future, we have plans to become completely carbon neutral and purchase carbon offsets. This will be accomplished with solar and/or wind arrays on the roof of our building with larger battery arrays indoors to store power. Hopefully the grid is more green at that point so we don't need to power a datacenter off of Mr. Sun alone. It's certainly a pipe dream but I can't wait for the day that we don't have to burn altered carbon to sustain our business when mother nature comes knocking.

Dear reader, thanks for doing so.

Live well.

(#4) Sat. Jan 22nd, 2022 - BGP Hell

Border Gateway Protocol, oh how we rarely get along. The thing about starting an ISP is that you usually don't have a lot of money flying around like the incumbents that took billions of taxpayer dollars to build sub-par networks in the 80s and 90s. With said lack of money comes the frustration of not being able to buy into the gold standard networking ecosystems like Cisco or Juniper. In our case, we decided to go with MikroTik. This is not a roast of MikroTik, as they've proven time and time again that they can compete with the big players. This is, more or less, just me shouting into the void about BGP compatibility and adherence to RFCs between vendors.

When you graduate from single broadcast domains and into the world of routing, there's a lot to learn. What are all of these acronyms flying around? BGP? ASNs? OSPF? VRFs? Yeesh, the subject matter explodes. Luckily, everyone's got it right for the most part and any manufacturer in the network space worth their salt has implemented all of these standards.

LACP backhaul to our colocation rack

Moving towards a fully routed network brings with it some challenges, namely if you don't own a lot of IP space. Fitchburg Fiber is the proud owner of a /24 block, or 256 public IP addresses. We paid almost $7k for the block at auction back in January of 2021. Nowadays similarly sized blocks go for nearly $14k at auction. Needless to say there's very little chance we're getting more IPv4 addresses anytime soon. IPv6 is still looming around the corner waiting for more widespread adoption. When our tooling and hardware supports it to the depth it supports IPv4 we're taking the plunge, but for now the name of the game is conservation.
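The per-address math makes the squeeze obvious. The block totals here are the ones from this post; the rest is just arithmetic.

```python
# Price per address for an IPv4 block bought at auction.
def per_ip(block_price: float, prefix_len: int = 24) -> float:
    addresses = 2 ** (32 - prefix_len)  # a /24 holds 256 addresses
    return block_price / addresses

jan_2021 = per_ip(7000)   # ~$27 per address
today = per_ip(14000)     # ~$55 per address
```

Doubling in about a year is exactly why conservation beats acquisition for a network our size.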

This brings me to my point. MikroTik makes great, nearly carrier-grade gear for budding players in the ISP game, but their BGP implementation in RouterOS v7 has consistently left a sour taste in my mouth. I just spent all day battling a spotty session with Cogent only to realize that v7 BGP doesn't like session encryption keys that are 80 characters long. Thanks Aaron from Cogent for bearing with my suffering for half an hour. I had to roll back to v6 for the time being just to re-evaluate my decisions.
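The workaround, for now, is just generating shorter session secrets. The 32-character cap below is an arbitrary safe choice on my part, not a documented RouterOS limit.

```python
import secrets
import string

# Generate a BGP session secret short enough to avoid the long-key
# quirk. 32 characters is an arbitrary conservative cap, not a
# documented limit.
ALPHABET = string.ascii_letters + string.digits

def bgp_session_key(length: int = 32) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Sticking to plain letters and digits also sidesteps any disagreement between vendors about which special characters are legal in an MD5 session password.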

I guess it's back to the lab again to build a replica of our core network and figure out where I went wrong.

Live well.

(#3) Fri. Jan 21st, 2022 - Flow

There's something about competence that's so appealing to me. I'm regularly enthralled by people that are highly skilled in some sort of dextrous action. Whether it's mundane like counting money quickly or impressive like perfectly executing a sequence of body movements to climb a rock wall, nothing in life really commands my attention like this.

When people talk about flow, it could mean thinking rapidly through a problem and making correct assumptions about its scope and solution. Alternatively, it could mean breaking and running every ball on a pool table without breaking a sweat. Regardless, I think I resonate more with flow in activities. I love a good think, but very little is more rewarding to me than feeling like I'm on my way to mastering a skill.

Almost everything I've been interested in has revolved around some sort of competency involving physical movements or coordination. In no specific order: archery, competitive FPS video games, ultimate frisbee, billiards, rock climbing, and Rubik's cubes (but quickly). I'm a terribly competitive person, to my detriment sometimes, and I always drift towards wanting to be proficient at nearly everything I do (whether I realize it or not). It really gets to me sometimes if I feel like I'm truly lacking in proficiency. This is probably why I've leaned more towards things I can do as an individual over the years. The pressure to compare yourself to or compete against others can be unbearable, and frankly unhealthy.

When we flow, those pressures disappear and we can just focus on succeeding at a task, whether it's something we enjoy or not. I guess what I'm trying to say is that I think there's some respite to be found, via flow, from the terrible, painful struggle that this world can be.

Find something you love and get really good at it. It may help talk you off the ledge someday.

Live well.

(#2) Thu. Jan 20th, 2022 - Pathfinding

The Gunks

I'm still trying to find my bearings here. When I thought of starting a blog yesterday I really had no theme in mind that I wanted to stick to. I honestly think the topical nature of this is going to be mostly stream of consciousness. This is fine to me because it's probably just going to be me consuming it for a while. Luckily, my interests span a wide variety of subject matter. Thankfully, I find my interests interesting.

Live well.

(#1) Wed. Jan 19th, 2022 - So it begins

I guess this begins my ramblings on the internet. Or is it self-administered therapy? Who knows for sure? Not I, that's who. I've dabbled with blogging and Twitter in the past but nothing ever took. We're trying Listed now only due to its integration with Standard Notes, which I'm writing this in. Looks good so far I suppose. I like the idea of near-zero effort blogging with no distractions.

Let's see if I can write for 100 days straight. I don't expect to amass a following, in fact I'll probably be one of the four people that ever find and read this. Hey me from the future! Go to bed, you probably need it!

Last year I started an ISP with my friend, then co-worker at Facebook, Tristan Taylor. You're probably going to hear about it quite a bit here. Anyone that's known me since high school will tell you that it's been a long-standing goal of mine to operate a server hosting company or ISP. Many small ventures over the years with the odd acquaintance or friend led me to December of 2020. A random, winding conversation led to:

Tristan: Man, I wish the internet in Fitchburg wasn't so damned terrible everywhere. This town would be great if people could afford quality internet.

Me: Yeah totally, wanna start an ISP with me?

Tristan: Uhh.. yes?

Me: Great, let's get started.

Obviously paraphrased, but the sentiment remains. I dove headfirst into a business venture with a stranger that shared my passion. It turns out Tristan is a pretty alright guy so somehow we're still doing this thing today. Coming from the tech side, I'd always sort of beat around the bush on the business side of things. In the past I'd constantly overlooked the bureaucratic waiting games that would inevitably plague us in Fitchburg. I'd often trivialize the effort involved with building scalable networks. Needless to say it's been a learning curve for us both.

A picture of the Fitchburg Fiber core network

With nary a customer online yet, we've weathered the storm for a year now and we've built some pretty cool things that we believe are going to help us bridge the digital divide in Fitchburg.

We can't wait to meet and support people and small businesses in the city and we're so thankful for the opportunity to actually make a tangible difference not only in peoples' bottom lines, but in outcomes too. It's no secret that those without access to technology simply aren't afforded equivalent opportunities to those with access to it. A wide range of factors could contribute to this, but to us none of them seem like hurdles impossible to clear. Fitchburg Fiber is, first and foremost, for the people.

Dear reader, thank you for doing so.

Live well.