The Big Data Bottleneck: Uploading to the Cloud


Document Type: ZapFlash
By: Jason Bloomberg | Posted: January 29, 2013


The problem with Big Data is that, well, Big Data are big. /Really/ big.
We're talking terabytes. Petabytes. Zettabytes.
Whatever's-even-bigger-bytes. And of course, we want to solve all our
Big Data challenges in the Cloud. If only we could get those
gigando-bytes /into/ the Cloud in the first place. And there's the rub.

Uploading Big Data from our internal network to the Cloud via an
Internet connection is as practical as filling a swimming pool through a
drinking straw. It doesn't matter how sophisticated our Big Data
analytics, how super-duper our Hadoopers. If we can't efficiently get
our data where we need them when we need them, we're stuck.

*Optimize the Pipe*

Fortunately, the Big Data upload problem isn't new. In fact, it's been
around for years, under the moniker Wide Area Network (WAN)
Optimization. Fortunate for us because vendors have been working on WAN
Optimization techniques for a while now, and now several of them are
repurposing those techniques to help with the Cloud.

For example, Aryaka
<http://t.ymlp270.net/eeuehadawjqjafahbatambmuu/click.php> has been
peddling WAN Optimization appliances for several years. Put one
appliance in your local data center, a second in the remote data center,
and proprietary technology moves data from one to the other at a rapid
clip. Now that the Cloud has turned their world upside down, they are
providing a distributed service at the remote end, a "mesh of network
connections" better suited to the Cloud. In other words, Aryaka is
building an offering similar to Content Delivery Networks (CDNs) like
Akamai <http://t.ymlp270.net/eeuewalawjqjakahbacambmuu/click.php>.

RainStor <http://t.ymlp270.net/eeueqarawjqjavahbazambmuu/click.php>, in
contrast, focuses primarily on a proprietary compression algorithm that
promises to squeeze data into one fortieth their original size.
Furthermore, RainStor's compressed data remain directly accessible using
standard SQL or even MapReduce on Hadoop with no storage-eating,
time-consuming reinflation.
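
To see how compressed data can stay queryable without reinflation, here
is a rough sketch of dictionary encoding in Python, where an equality
filter runs directly against the small encoded column. This is a generic
illustration of the idea, not RainStor's proprietary algorithm, and the
sample column is made up.

    # Illustrative only: dictionary-encode a column so an equality filter
    # can run against the encoded form, with no decompression pass.
    def dictionary_encode(values):
        codes = {}                      # distinct value -> small integer code
        encoded = []
        for v in values:
            if v not in codes:
                codes[v] = len(codes)
            encoded.append(codes[v])
        return codes, encoded

    def filter_equals(codes, encoded, value):
        # Look the code up once, then scan the integer array directly.
        code = codes.get(value)
        if code is None:
            return []
        return [i for i, c in enumerate(encoded) if c == code]

    stores = ["NYC", "SFO", "NYC", "LON", "SFO", "NYC"]
    codes, encoded = dictionary_encode(stores)
    print(filter_equals(codes, encoded, "NYC"))   # [0, 2, 5]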

Then there's Aspera
<http://t.ymlp270.net/eeueyazawjqjagahbatambmuu/click.php>, who's found
a sophisticated way around the limitations of the Transmission Control
Protocol (TCP) itself. After all, TCP's tiny packets and penchant for
resending them are a large part of the reason uploading Big Data over
the Internet runs like such a dog in the first place. To teach this dog
a new trick or two, Aspera's FASP transfers use one TCP port for session
initialization and control, and one User Datagram Protocol (UDP) port
for data transfer.

UDP is a simpler, fire-and-forget protocol that doesn't perform the
retries that provide TCP's reliability, but by combining the two
protocols, FASP achieves nearly 100% error-free data throughput. In
fact, FASP reaches the maximum transfer speed possible given the
hardware on which you deploy it, and maintains maximum available
throughput independent of network delay and packet loss. FASP also
aggregates hundreds of concurrent transfers on commodity hardware,
addressing the drinking straw problem in part by supporting hundreds of
straws at once.
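
To make that port split concrete, here is a toy sender in the same
spirit: it negotiates the session over a TCP control connection, then
pushes the bulk data as sequence-numbered UDP datagrams. This is not
Aspera's actual FASP, which layers its own retransmission and rate
control on top of UDP; the host, ports, file name, and wire format below
are placeholders, and a matching receiver is assumed.

    import socket

    CONTROL_HOST, CONTROL_PORT, DATA_PORT = "cloud.example.com", 33001, 33001
    FILE_PATH, CHUNK = "bigdata.bin", 1400   # keep datagrams under a typical MTU

    # 1. Session setup and control over TCP (reliable, low volume).
    ctrl = socket.create_connection((CONTROL_HOST, CONTROL_PORT))
    ctrl.sendall(b"BEGIN " + FILE_PATH.encode() + b"\n")

    # 2. Bulk transfer over UDP (no per-packet ACKs; a real protocol adds
    #    its own loss recovery and congestion control here).
    data_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(FILE_PATH, "rb") as f:
        seq = 0
        while chunk := f.read(CHUNK):
            # Prefix a sequence number so the receiver can detect loss.
            data_sock.sendto(seq.to_bytes(8, "big") + chunk,
                             (CONTROL_HOST, DATA_PORT))
            seq += 1

    # 3. Close the session over the TCP control channel.
    ctrl.sendall(b"END\n")
    ctrl.close()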

CloudOpt <http://t.ymlp270.net/eeumsalawjqjarahbazambmuu/click.php> is
also a player worth mentioning. Their JetStream technology takes a
soup-to-nuts approach that combines compression and transmission
protocol optimization with advanced data deduplication, SSL
acceleration, and an ingenious approach to getting the most performance
out of cached data. There's also Attunity Cloudbeam
<http://t.ymlp270.net/eeumuazawjqjarahbadambmuu/click.php>, which touts
file-to-Cloud upload, file-to-Cloud replication, and Cloud-to-Cloud
replication. Attunity's Managed File Transfer (MFT) incorporates a
secure DMZ architecture, security policy enforcement, guaranteed and
accelerated transfers, process automation, and audit capabilities across
each stage of the file transfer process.
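
The deduplication piece, at least, is easy to picture: carve the payload
into chunks, fingerprint each one, and only ship the chunks the remote
end hasn't already seen. The sketch below shows that general technique
with fixed-size chunks and SHA-256 hashes; it is not CloudOpt's
JetStream implementation, and the in-memory "remote index" simply stands
in for whatever the far side really tracks.

    import hashlib

    CHUNK = 64 * 1024                 # fixed 64 KB chunks, for simplicity
    remote_index = set()              # hashes the remote end already holds

    def plan_upload(path):
        """Return (chunks to send, count of chunks deduplicated away)."""
        to_send, skipped = [], 0
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK):
                digest = hashlib.sha256(chunk).hexdigest()
                if digest in remote_index:
                    skipped += 1      # remote has it: a reference suffices
                else:
                    remote_index.add(digest)
                    to_send.append((digest, chunk))
        return to_send, skipped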

Finally, there's Amazon Web Services (AWS) itself. Yes, most if not all
of the vendors discussed above can firehose data into AWS's various
storage services. But AWS also offers a simple, if decidedly low-tech,
approach: AWS Import/Export
<http://t.ymlp270.net/eeumeaxawjqjaaahbacambmuu/click.php>. Simply ship
your big hard drives to Amazon. They'll hook them up, copy the data to
your Simple Storage Service (S3) or other storage service, and ship the
drive back when you're done. This SneakerNet or "Forklifting" approach,
believe it or not, can even be faster than some of the over-the-Internet
optimizations for certain Big Data sets, even considering the time it
takes to FedEx your drives to AWS.
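
A quick back-of-the-envelope calculation shows why. Assuming, say, a 10
terabyte data set and a dedicated 100 Mbps Internet link (both figures
are illustrative), the straight upload takes well over a week, which is
comfortably longer than a round trip by courier:

    # Back-of-the-envelope only; the 10 TB and 100 Mbps figures are assumptions.
    terabytes, link_mbps = 10, 100
    bits = terabytes * 8 * 10**12
    days_online = bits / (link_mbps * 10**6) / 86_400
    print(f"{days_online:.1f} days of continuous uploading")   # roughly 9.3 days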

*On Beyond Drinking Straws*

The problem with most of the approaches above (excepting only Aspera and
Amazon's forklift) is that they make the drinking straw we're using to
fill that swimming pool better, faster, and bigger -- but we're still
filling that damn pool with a straw. So what's better than a straw? How
about many straws? If any optimization technique improves a single
connection to the Internet, then it stands to reason that establishing
many connections to your Cloud provider in parallel would multiply your
upload speed dramatically.
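
As one concrete version of the many-straws idea, Amazon S3 supports
multipart uploads, where the client splits a file into parts and pushes
them over parallel connections. Here is a minimal sketch using the boto3
SDK; the bucket name, object key, and local file path are placeholders,
and the part size and concurrency are arbitrary tuning choices, not
recommendations.

    import boto3
    from boto3.s3.transfer import TransferConfig

    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,   # go multipart above 64 MB
        multipart_chunksize=64 * 1024 * 1024,   # 64 MB parts
        max_concurrency=16,                     # sixteen straws at once
    )

    s3 = boto3.client("s3")
    s3.upload_file("bigdata.bin", "my-bucket", "uploads/bigdata.bin",
                   Config=config)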

Fair enough, but let's think out of the box here. A fundamental Big Data
best practice is to bring your analytics to your data. The reasoning is
that it's hard to move your data but easy to move your software, so once
your data are in the Cloud, you should also run your analytics there.

But this argument also works in reverse. If your data /aren't/ in the
Cloud, then it may not make sense to move them to the Cloud simply to
run your software there. Instead, bring your software to your data, even
if they're on premise.

Perish the thought, you say! We're sold on Big Data in the Cloud. We've
crunched the numbers and we know it's going to save us money, provide
more capabilities, and facilitate sharing information across our
organization and the world. Fair enough. Here's another twist for you.

Why are your Big Data sets outside the Cloud to begin with? Sure, you're
stuck with existing, legacy data sets wherever they happen to be today.
But as a rule, those either don't constitute Big Data or will soon cease
to be large enough to warrant the Big Data label. By definition, Big
Data sets keep expanding exponentially, which means that you keep
creating them with each new generation of newfangled tools.

In fact, there are already multitudinous sources for raw Big Data, as
varied as the Big Data challenges organizations struggle with today. But
many such sources are already in the Cloud, or could easily be moved
there. Take, for example, clickthrough data from your Web sites. Such
data come from your Web servers, which should be in the Cloud anyway. If
your Big Data come from Web servers scattered here and there in the
Cloud, then moving the clickthrough data into a Big Data repository for
processing can happen within the same Cloud. No need for uploading.

What about data sources that aren't already in the Cloud? Many Big Data
streams come from instrumentation or sensors of some sort, from
seismographs underground to EKGs in hospitals to UPC scanners in
supermarkets. There's no reason why such instrumentation shouldn't pour
its raw data feeds directly into the Cloud. What good is storing a
week's worth of supermarket purchasing data on premise anyway? You'll
want to store, process, manage, and analyze those data in the Cloud, so
the sooner you get them there, the better.
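
As a sketch of what pouring a raw feed directly into the Cloud might
look like, here is a toy UPC scanner that posts each reading to a Cloud
ingestion endpoint over HTTPS the moment it happens. The endpoint URL,
field names, and scanner ID are all hypothetical.

    import json, random, time, urllib.request

    ENDPOINT = "https://ingest.example.com/v1/readings"   # placeholder endpoint

    def post_reading(scanner_id, upc):
        payload = json.dumps({
            "scanner": scanner_id,
            "upc": upc,
            "ts": time.time(),
        }).encode()
        req = urllib.request.Request(ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)   # ship the reading straight to the Cloud

    while True:
        post_reading("store-042-lane-03",
                     random.choice(["012345678905", "036000291452"]))
        time.sleep(1)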

*The ZapThink Take*

The only reason we have to worry about uploading Big Data to the Cloud
in the first place is because our Big Data aren't already in the Cloud.
And broadly speaking, the reason they're not already in the Cloud is
because the Cloud isn't everywhere. Instead, we think of the Cloud as
being locked away in data centers, those alien, air conditioned
facilities packed full of racks of high tech equipment.

That may be true today, but as ZapThink has discussed before
<http://t.ymlp270.net/eeummanawjqjavahbazambmuu/click.php>, there's
nothing in the definition of Cloud Computing that requires Cloud
resources to live in data centers. You might have a bit of the Cloud in
your pocket, or on your laptop, in your car
<http://t.ymlp270.net/eeumjacawjqjadahbaoambmuu/click.php>, or in your
refrigerator. For now, this vision of the Internet of Things meeting the
Cloud is mostly the stuff of science fiction. We're only now figuring
out what it means to have a ubiquitous global network of sensors, from
the aforementioned EKGs and UPC scanners to traffic cameras to home
thermostats. But the writing is on the wall. Just as we now don't think
twice about carrying supercomputers in our pockets, it's only a matter
of time until the Cloud itself is fully distributed and ubiquitous. When
that happens, the question of moving Big Data to the Cloud will be moot.
They will already be there.
