
AWS Hot Startups – February 2017


Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-hot-startups-february-2017-2/

As we finish up the month of February, Tina Barr is back with some awesome startups.

-Ana


This month we are bringing you five innovative hot startups:

  • GumGum – Creating and popularizing the field of in-image advertising.
  • Jiobit – Smart tags to help parents keep track of kids.
  • Parsec – Offers flexibility in hardware and location for PC gamers.
  • Peloton – Revolutionizing indoor cycling and fitness classes at home.
  • Tendril – Reducing energy consumption for homeowners.

If you missed any of our January startups, make sure to check them out here.

GumGum (Santa Monica, CA)
GumGum is best known for inventing and popularizing the field of in-image advertising. Founded in 2008 by Ophir Tanz, the company is on a mission to unlock the value held within the vast content produced daily via social media, editorials, and broadcasts in a variety of industries. GumGum powers campaigns across more than 2,000 premium publishers, which are seen by over 400 million users.

In-image advertising was pioneered by GumGum and has given companies a platform to deliver highly visible ads to a place where the consumer’s attention is already focused. Using image recognition technology, GumGum delivers targeted placements as contextual overlays on related pictures, as banners that fit on all screen sizes, or as In-Feed placements that blend seamlessly into the surrounding content. Using Visual Intelligence, GumGum can scour social media and broadcast TV for all images and videos related to a brand, allowing companies to gain a stronger understanding of their audience and how they are relating to that brand on social media.

GumGum relies on AWS for its Image Processing and Ad Serving operations. Using AWS infrastructure, GumGum currently processes 13 million requests per minute across the globe and generates 30 TB of new data every day. The company uses a suite of services including but not limited to Amazon EC2, Amazon S3, Amazon Kinesis, Amazon EMR, AWS Data Pipeline, and Amazon SNS. AWS edge locations allow GumGum to serve its customers in the US, Europe, and Japan, and the company plans to expand its infrastructure to Australia and other APAC regions in the future.

For a look inside GumGum’s startup culture, check out their first Hackathon!

Jiobit (Chicago, IL)
Jiobit was inspired by a real event that took place in a crowded Chicago park. A couple of summers ago, John Renaldi experienced every parent’s worst nightmare – he lost track of his then 6-year-old son in a public park for almost 30 minutes. John knew he wasn’t the only parent with this problem. After months of research, he determined that over 50% of parents have had a similar experience and an even greater percentage are actively looking for a way to prevent it.

Jiobit is the world’s smallest and longest lasting smart tag that helps parents keep track of their kids in every location – indoors and outdoors. The small device is kid-proof: lightweight, durable, and waterproof. It acts as a virtual “safety harness” as it uses a combination of Bluetooth, Wi-Fi, Multiple Cellular Networks, GPS, and sensors to provide accurate locations in real-time. Jiobit can automatically learn routes and locations, and will send parents an alert if their child does not arrive at their destination on time. The talented team of experienced engineers, designers, marketers, and parents has over 150 patents and has shipped dozens of hardware and software products worldwide.

The Jiobit team is utilizing a number of AWS services in the development of their product. Security is critical to the overall product experience, and they are over-engineering security on both the hardware and software side with the help of AWS. Jiobit is also working to become the first child-monitoring device with an Alexa Skill, delivered via the Amazon Echo device (see here for a demo!). The devices use AWS IoT to send and receive data from the Jio Cloud over the MQTT protocol. Once data is received, Jiobit uses AWS Lambda to parse it and take appropriate actions, including storing relevant data in Amazon DynamoDB and sending location data to Amazon Machine Learning processing jobs.

Visit the Jiobit blog for more information.

Parsec (New York, NY)
Parsec operates under the notion that everyone should have access to the best computing in the world because access to technology creates endless opportunities. Founded in 2016 by Benjy Boxer and Chris Dickson, Parsec aims to eliminate the burden of hardware upgrades that users frequently experience by building the technology to make a computer in the cloud available anywhere, at any time. Today, they are using their technology to enable greater flexibility in the hardware and location that PC gamers choose to play their favorite games on. Check out this interview with Benjy and our Startups team for a look at how Parsec works.

Parsec built their first product to improve the gaming experience; gamers no longer have to purchase consoles or expensive PCs to access the entertainment they love. Their low latency video streaming and networking technologies allow gamers to remotely access their gaming rig and play on any Windows, Mac, Android, or Raspberry Pi device. With the global reach of AWS, Parsec is able to deliver cloud gaming to the median user in the US and Europe with less than 30 milliseconds of network latency.

Parsec users currently have two options available to start gaming with cloud resources. They can either set up their own machines with the Parsec AMI in their region or rely on Parsec to manage everything for a seamless experience. In either case, Parsec uses the g2.2xlarge EC2 instance type. Parsec is using Amazon Elastic Block Store to store games, Amazon DynamoDB for scalability, and Amazon EC2 for its web servers and various APIs. They also deal with a high volume of logs and take advantage of the Amazon Elasticsearch Service to analyze the data.

Be sure to check out Parsec’s blog to keep up with the latest news.

Peloton (New York, NY)
The idea for Peloton was born in 2012 when John Foley, Founder and CEO, and his wife Jill started realizing the challenge of balancing work, raising young children, and keeping up with personal fitness. This is a common challenge people face – they want to work out, but there are a lot of obstacles that stand in their way. Peloton offers a solution that enables people to join indoor cycling and fitness classes anywhere, anytime.

Peloton has created a cutting-edge indoor bike that streams up to 14 hours of live classes daily and has over 4,000 on-demand classes. Users can access live classes from world-class instructors from the convenience of their home or gym. The bike tracks progress with in-depth ride metrics and allows people to compete in real-time with other users who have taken a specific ride. The live classes even feature top DJs that play current playlists to keep users motivated.

With an aggressive marketing campaign, which has included high-visibility TV advertising, Peloton made the decision to run its entire platform in the cloud. Most recently, they ran an ad during an NFL playoff game, and requests to their site jumped from ~2k/min to ~32.2k/min within 60 seconds. As they continue to grow and diversify, they are utilizing services such as Amazon S3 for thousands of hours of archived on-demand video content, Amazon Redshift for data warehousing, and Application Load Balancer for intelligent request routing.

Learn more about Peloton’s engineering team here.

Tendril (Denver, CO)
Tendril was founded in 2004 with the goal of helping homeowners better manage and reduce their energy consumption. Today, electric and gas utilities use Tendril’s data analytics platform on more than 140 million homes to deliver a personalized energy experience for consumers around the world. Using the latest technology in decision science and analytics, Tendril can gain access to real-time, ever-evolving data about energy consumers and their homes so they can improve customer acquisition, increase engagement, and orchestrate home energy experiences. In turn, Tendril helps its customers unlock the true value of energy interactions.

AWS helps Tendril run its services globally, while scaling capacity up and down as needed, and in real-time. This has been especially important in support of Tendril’s newest solution, Orchestrated Energy, a continuous demand management platform that calculates a home’s thermal mass, predicts consumer behavior, and integrates with smart thermostats and other connected home devices. This solution allows millions of consumers to create a personalized energy plan for their home based on their individual needs.

Tendril builds and maintains most of its infrastructure services with open source tools running on Amazon EC2 instances, while also making use of AWS services such as Elastic Load Balancing, Amazon API Gateway, Amazon CloudFront, Amazon Route 53, Amazon Simple Queue Service, and Amazon RDS for PostgreSQL.

Visit the Tendril Blog for more information!

— Tina Barr


The Economics of Hybrid Cloud Storage


Post Syndicated from Andy Klein original https://www.backblaze.com/blog/hybrid-cloud-storage-economics/

“Hybrid Cloud” has jumped into the IT vernacular over the last few years. Hybrid Cloud solutions intelligently divide processing and data storage between on-premise and off-premise resources to maximize efficiency. Businesses seamlessly extend their data processing and data storage capabilities to the cloud, enabling them to manage unusual or fluctuating demands for services. More recently, businesses have been utilizing cloud computing and storage resources for their day-to-day operations instead of building out their own infrastructure.

Companies in the media and entertainment industry are natural candidates for the hybrid cloud, as on any given day these organizations ingest and process large amounts of data in the form of video and audio files. Processing and storing this data effectively is paramount to managing the cost of a project and keeping it on schedule. Below we’ll examine the data storage aspects of the hybrid cloud when considering such a solution for a media and entertainment organization.

The Classic Storage Environment

In the media and entertainment industry, much of the video and audio collected is either never used or reviewed once and then archived. A rough estimate is that 10% of all the audio and video collected is used in the various drafts produced. That means that 90% of the data is archived: stored on the local storage systems or perhaps saved off to tape. This archived data cannot be deleted until the project owner agrees, an event that can take months and sometimes years.

Using local storage to keep this archived data means you have to “overbuy” your on-premise storage to accommodate the maximum amount of data you might ever need to hold. While this allows the data to be easily accessed and restored, you have to purchase or lease substantially more storage than you really need.

As a consequence, many organizations decided to use tape storage for their archived data to reduce the need for on-premise data storage. They soon discovered the hidden costs of tape systems: ongoing tape maintenance, supply costs, and continuing personnel expenses. In addition, recovering an archived video or audio file from tape was often slow, cumbersome, and fraught with error.

Hybrid Cloud Storage

Cantemo’s Media Asset Management Portal can identify and automatically route video and audio data to a storage destination – on-premise, cloud, tape, etc. – as needed. Let’s consider a model where 20% of the data ingested is needed for the duration of a given project. The remaining 80% is evaluated and determined to be archivable, although we might need to access a video or audio clip at a later time. What is the best destination for the Cantemo Portal to route video and audio that optimizes both cost and access? Let’s review each of our choices: on-premise disk, tape, and cloud storage.

Data Destinations

To compare the three solutions, we’ve considered the cost of each system over a five-year period, covering initial purchase cost, ongoing costs and supplies, maintenance costs, personnel cost for operations, and subscription costs.

  • On-Premise Disk Storage – On-premise storage can range from a 1 petabyte NAS (Network Attached Storage) system to a multi-petabyte SAN (Storage Area Network). The cost ranges from $12/terabyte/month to $20/terabyte/month (or more). These figures assume new equipment at “street” prices where available. These systems are used for instant access to the data over a high-speed network connection. The data, or a proxy, can be altered multiple times and versioning is required.
  • Tape Storage – Typically these are LTO (Linear Tape-Open) systems with a minimum of two local tape systems, operational costs, etc. The data is stored, typically in batch mode, and accessed infrequently. The tapes can be stored on-site or off-site. Off-site storage costs more. The cost for LTO tape ranges from $7/terabyte/month to $10/terabyte/month, with much of that being the ongoing operational costs. The design includes one incremental tape per day, 2-week retention, first week on-site, second week off-site, with weekly pickup/drop-off. Also included are weekly, monthly, and yearly full backups, rotated on/off site as needed for tape testing, data recovery, etc.
  • Cloud Storage – The cost of cloud storage has come down over the last few years and currently ranges from $5/terabyte/month to $25/terabyte/month for storage depending on the vendor. Video and audio stored in cloud storage is typically easy to locate and readily available for recovery if needed. In most cases, there are minimal operational costs as, for example, the Cantemo Portal software is designed to locate and recover files that are required, but not present on the on-premise storage system.

Of course, a given organization will have their own costs, but in general they should fall within the ranges noted above.

Comparing Storage Costs

In comparing costs of the different methods noted above, we’ll present three scenarios. For each scenario we’ll use data storage amounts of 100 terabytes, 1 petabyte, and 2 petabytes. Each table is the same format, all we’ve done is change how the data is distributed: on-premise, tape, or cloud. The math can be adapted for any set of numbers you wish to use.
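Each scenario below is simply a weighted sum of per-tier rates. As a general formula (our restatement of the tables that follow, not notation from the original article):

\[
\text{Monthly cost} = D \times \sum_{i} p_i \, r_i
\]

where D is the total terabytes stored, p_i is the fraction of data on tier i, and r_i is that tier’s rate in $/TB/month. For example, Scenario 2 at 100 TB on the low end works out to 100 × (0.20 × $12 + 0.80 × $7) = $240 + $560 = $800 per month.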

SCENARIO 1 – 100% of data is in on-premise storage

Scenario 1 (monthly cost)         100 TB total   1 PB total   2 PB total
Data stored on-premise (100%)     100 TB         1,000 TB     2,000 TB
Low – $12/TB/month                $1,200         $12,000      $24,000
High – $20/TB/month               $2,000         $20,000      $40,000

SCENARIO 2 – 20% of data is in on-premise storage and 80% of data is on LTO Tape

Scenario 2 (monthly cost)         100 TB total   1 PB total   2 PB total
Data stored on-premise (20%)      20 TB          200 TB       400 TB
Data stored on tape (80%)         80 TB          800 TB       1,600 TB
On-premise, low – $12/TB/month    $240           $2,400       $4,800
On-premise, high – $20/TB/month   $400           $4,000       $8,000
LTO tape, low – $7/TB/month       $560           $5,600       $11,200
LTO tape, high – $10/TB/month     $800           $8,000       $16,000
TOTAL, low                        $800           $8,000       $16,000
TOTAL, high                       $1,200         $12,000      $24,000

Using tape to store 80% of the data reduces the cost by 33% compared with keeping everything on-premise ($800 versus $1,200 per month at 100 TB, on the low end).

SCENARIO 3 – 20% of data is in on-premise storage and 80% of data is in cloud storage

Scenario 3 (monthly cost)         100 TB total   1 PB total   2 PB total
Data stored on-premise (20%)      20 TB          200 TB       400 TB
Data stored in cloud (80%)        80 TB          800 TB       1,600 TB
On-premise, low – $12/TB/month    $240           $2,400       $4,800
On-premise, high – $20/TB/month   $400           $4,000       $8,000
Cloud, low – $5/TB/month          $400           $4,000       $8,000
Cloud, high – $25/TB/month        $2,000         $20,000      $40,000
TOTAL, low                        $640           $6,400       $12,800
TOTAL, high                       $2,400         $24,000      $48,000

Storing 80% of the data in the cloud can lead to a 46% savings on the low end ($640 versus $1,200 per month at 100 TB), but could actually be more expensive depending on the vendor selected.

Separate the Costs

Often, cloud storage costs are combined with cloud computing costs in the Hybrid Cloud model, hiding the true cost of the cloud storage, perhaps until it’s too late. The savings gained by using cloud computing services a few times a day may be completely offset by the high cost of cloud storage, which you would be using the entire time. Here are some recommendations.

  1. Ask to have your Hybrid Cloud costs broken out into computing and storage costs; it should be clear what you are paying for each service.
  2. Consider moving the cloud data storage cost to a low-cost provider such as Backblaze B2 Cloud Storage, which charges only $5/terabyte/month for cloud storage. This is particularly useful for archived data that still needs to be accessible, as Backblaze cloud storage is readily available.
  3. If compute, data distribution, and data archiving services are required, the Cantemo Portal allows you to designate different cloud storage vendors depending on the usage. For example, data requiring computing services can be stored with Amazon S3 and data designated for archiving can be stored in Backblaze. This allows you to optimize access while minimizing costs.

Considering Hybrid Data Storage

Today, most companies in the Media and Entertainment industry have large amounts of data. The hybrid cloud has the potential to change how the industry does business by moving to cloud-based platforms that allow for global collaboration around the clock. In these scenarios, the amount of data created and stored will be staggering, even by today’s standards. As a consequence, it will be paramount for you to know the most cost efficient way to store and access your data.

The latest version of Cantemo Portal includes native integration to Backblaze B2 Cloud Storage, making it easy to create custom rules for archiving to the cloud and access archived files when needed.

(Author’s note: I used on-premise throughout this document as it is the common vernacular used in the tech industry. Apologies to those grammatically offended.)


ISP Blocks Pirate Bay But Vows to Fight Future Blocking Demands


Post Syndicated from Andy original https://torrentfreak.com/isp-blocks-pirate-bay-but-vows-to-fight-future-blocking-demands-170301/

Two weeks ago, after almost three years of legal battles, Universal Music, Sony Music, Warner Music, Nordisk Film and the Swedish Film Industry finally achieved their dream of blocking a ‘pirate’ site.

The Patent and Market Court ordered Bredbandsbolaget, the ISP at the center of the action, to block The Pirate Bay and another defunct site, Swefilmer. A few hours ago the provider barred its subscribers from accessing them, just ahead of the Court deadline.

This pioneering legal action will almost certainly open the floodgates to similar demands in the future, but if content providers think that Bredbandsbolaget will roll over and give up, they have another thing coming.

In a statement announcing that it had complied with the orders of the court, the ISP said that despite having good reasons to appeal, it had not been allowed to do so. The provider adds that it finds it unreasonable that any provider should have to block content following pressure from private interests, and says it will fight all future requests.

“We are now forced to contest any future blocking demands. It is the only way for us and other Internet operators to ensure that private players should not have the last word regarding the content that should be accessible on the Internet,” Bredbandsbolaget said.

Noting that the chances of contesting a precedent-setting ruling are “small or non-existent”, the ISP added that not all providers will have the resources to fight if they are targeted next. Fighting should be the aim, though, since there are problems with the existing court order.

According to Bredbandsbolaget, the order requires it to block 100 domain names. However, the ISP says that during the trial it was not determined whether they all lead to illegal sites. In fact, it appears that some of the domains actually point to sites that are either fully legal or non-operational.

For example, in tests conducted by TF this morning, the domain bay.malk.rocks led to a Minecraft forum, fattorrents.ws and magnetsearch.net/org were dead, piratewiki.info had expired, torrentdr.com was parked and ViceTorrent.com returned error 404. Also, Swefilmer.com returned a placeholder and SweHD.com was parked and for sale.

“What domains should be blocked or not blocked is therefore reliant on rightsholders’ sincerity, infallibility and the ability to make proportionate assessments,” Bredbandsbolaget warns.

“It is still unclear which body receives questions and complaints if an operator is required to mistakenly block a domain.”

In the wake of the blocking ruling two weeks ago, two other major ISPs in Sweden indicated that they too would put up a fight against blocking demands.

Bahnhof slammed the decision to block The Pirate Bay, describing the effort as signaling the “death throes” of the copyright industry.

Telia was more moderate but said it has no intention of blocking The Pirate Bay, unless it is forced to do so by law.



The full list of domains that were blocked this morning is as follows:

thepiratebay.se
thepiratebay.org
accesspiratebay.com
ahoy.one
bay.malk.rocks
baymirror.date
baymirror.win
bayproxy.date
bayproxy.pw
fastpiratebay.co.uk
fattorrents.ws
gameofbay.org
ikwilthepiratebay.org
kuiken.co
magnetsearch.net
magnetsearch.org
pbp.rocks
pbproxy.com
piraattilahti.net
pirate.trade
piratebay.click
piratebayblocked.com
piratebayproxy.tf
piratebays.co.uk
piratehole.com
pirateportal.xyz
pirateproxies.info
pirateproxies.net
pirate-proxy.info
pirateproxy.online
pirateproxy.wf
pirateproxy.vip
pirateproxy.yt
pirateproxybay.tech
pirates.pw
piratesbay.pe
piratetavern.net
piratetavern.org
piratewiki.info
proxypirate.pw
proxytpb.nl
thebay.tv
thehiddenbay.xyz
thenewbay.org
thepbproxy.website
thepiratebay.ar.com
thepiratebay.bypassed.live
thepiratebay.bypassed.red
thepiratebay.bypassed.video
thepiratebay.casa
thepiratebay.immunicity.live
thepiratebay.immunicity.video
thepiratebay.immunicity.red
thepiratebay.je
thepiratebay.lv
thepiratebay.mg
thepiratebay.red
thepiratebay.run
thepiratebay.skillproxy.com
thepiratebay.skillproxy.net
thepiratebay.skillproxy.org
thepiratebay.unblockthis.net
torrentdr.com
thepiratebay.uk.net
thepiratebay.unblocked.rocks
thepiratebay.unblocked.video
thepiratebay.unblockerproxy.xyz
thepiratebay-proxy.com
thepirateproxy.co
thepirateproxy.info
thepirateproxy.website
thepirateproxybay.xyz
theproxy.pw
theproxybay.pw
tpb.dashitz.com
tpb.patatje.eu
tpb.portalimg.com
tpb.proxyduck.co
tpb.retro.black
tpb.vrelk.com
tpbay.co
tpbmirror.us
tpbpro.xyz
tpbproxy.cc
tpbproxy.pw
tpbproxy.website
tproxy.pro
ukpirate.click
ukpirate.org
ukpirateproxy.xyz
unblockbay.com
unblockthepiratebay.net
unblockthepiratebay.org
urbanproxy.eu
vicetorrent.com
battleit.ee/tpb
thepiratebay.gg
bayproxy.org
thepirateproxybay.site
bayproxy.net
swefilmer.com
www.swefilmer.com
swehd.com
www.swehd.com


Analyze Security, Compliance, and Operational Activity Using AWS CloudTrail and Amazon Athena


Post Syndicated from Sai Sriparasa original https://aws.amazon.com/blogs/big-data/aws-cloudtrail-and-amazon-athena-dive-deep-to-analyze-security-compliance-and-operational-activity/

As organizations move their workloads to the cloud, audit logs provide a wealth of information on the operations, governance, and security of assets and resources. As the complexity of the workloads increases, so does the volume of audit logs being generated. It becomes increasingly difficult for organizations to analyze and understand what is happening in their accounts without a significant investment of time and resources.

AWS CloudTrail and Amazon Athena help make it easier by combining the detailed CloudTrail log files with the power of the Athena SQL engine to easily find, analyze, and respond to changes and activities in an AWS account.

AWS CloudTrail records API calls and account activities and publishes the log files to Amazon S3. Account activity is tracked as an event in the CloudTrail log file. Each event carries information such as who performed the action, when the action was done, which resources were impacted, and many more details. Multiple events are stitched together and structured in a JSON format within the CloudTrail log files.

Amazon Athena uses Apache Hive’s data definition language (DDL) to create tables and Presto, a distributed SQL engine, to run queries. Apache Hive does not natively support files in JSON, so we’ll have to use a SerDe to help Hive understand how the records should be processed. A SerDe interface is a combination of a serializer and deserializer. A deserializer helps take data and convert it into a Java object while the serializer helps convert the Java object into a usable representation.

In this blog post, we will walk through how to set up and use the recently released Amazon Athena CloudTrail SerDe to query CloudTrail log files for EC2 security group modifications, console sign-in activity, and operational account activity. This post assumes that customers already have AWS CloudTrail configured. For more information about configuring CloudTrail, see Getting Started with AWS CloudTrail in the AWS CloudTrail User Guide.

Setting up Amazon Athena

Let’s start by signing in to the Amazon Athena console and performing the following steps.


Create a table in the default sampledb database using the CloudTrail SerDe. The easiest way to create the table is to copy and paste the following query into the Athena query editor, modify the LOCATION value, and then run the query.

Replace:

LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<optional – AWS_Account_ID>/'

with the S3 bucket where your CloudTrail log files are delivered. For example, if your CloudTrail S3 bucket is named “aws-sai-sriparasa” and you set up a log file prefix of “/datalake/cloudtrail/” you would edit the LOCATION statement as follows:

LOCATION 's3://aws-sai-sriparasa/datalake/cloudtrail/'

CREATE EXTERNAL TABLE cloudtrail_logs (
eventversion STRING,
userIdentity STRUCT<
  type:STRING,
  principalid:STRING,
  arn:STRING,
  accountid:STRING,
  invokedby:STRING,
  accesskeyid:STRING,
  userName:STRING,
  sessioncontext:STRUCT<
    attributes:STRUCT<
      mfaauthenticated:STRING,
      creationdate:STRING>,
    sessionIssuer:STRUCT<
      type:STRING,
      principalId:STRING,
      arn:STRING,
      accountId:STRING,
      userName:STRING>>>,
eventTime STRING,
eventSource STRING,
eventName STRING,
awsRegion STRING,
sourceIpAddress STRING,
userAgent STRING,
errorCode STRING,
errorMessage STRING,
requestParameters STRING,
responseElements STRING,
additionalEventData STRING,
requestId STRING,
eventId STRING,
resources ARRAY<STRUCT<
  arn:STRING,
  accountId:STRING,
  type:STRING>>,
eventType STRING,
apiVersion STRING,
readOnly STRING,
recipientAccountId STRING,
serviceEventDetails STRING,
sharedEventID STRING,
vpcEndpointId STRING
)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<optional – AWS_Account_ID>/';

After the query has been executed, a new table named cloudtrail_logs will be added to Athena.

Athena charges you by the amount of data scanned per query.  You can save on costs and get better performance when querying CloudTrail log files by partitioning the data to the time ranges you are interested in.  For more information on pricing, see Athena pricing.  To better understand how to partition data for use in Athena, see Analyzing Data in S3 using Amazon Athena.
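To illustrate that partitioning advice, here is a sketch (ours, not part of the official walkthrough) of a partitioned variant of the table above, registering a single day of logs. It assumes CloudTrail’s standard S3 key layout (AWSLogs/<account-id>/CloudTrail/<region>/<year>/<month>/<day>/); the table name is hypothetical, and the column list is trimmed for brevity since the CloudTrail SerDe maps columns by name.

CREATE EXTERNAL TABLE cloudtrail_logs_partitioned (
eventtime STRING,
eventname STRING,
eventsource STRING,
awsregion STRING,
sourceipaddress STRING,
useragent STRING
)
PARTITIONED BY (region STRING, year STRING, month STRING, day STRING)
ROW FORMAT SERDE 'com.amazon.emr.hive.serde.CloudTrailSerde'
STORED AS INPUTFORMAT 'com.amazon.emr.cloudtrail.CloudTrailInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<AWS_Account_ID>/CloudTrail/';

ALTER TABLE cloudtrail_logs_partitioned ADD PARTITION (region='us-east-1', year='2017', month='02', day='15')
LOCATION 's3://<Your CloudTrail s3 bucket>/AWSLogs/<AWS_Account_ID>/CloudTrail/us-east-1/2017/02/15/';

A query that filters on region, year, month, and day then scans only that day’s objects instead of the entire bucket.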

Popular use cases

These use cases focus on:

  • Amazon EC2 security group modifications
  • Console Sign-in activity
  • Operational account activity

EC2 security group modifications

When reviewing an operational issue or security incident for an EC2 instance, the ability to see any associated security group change is a vital part of the analysis.

For example, if an EC2 instance triggers a CloudWatch metric alarm for high CPU utilization, we can first look to see if there have been any security group changes (the addition of new security groups or the addition of ingress rules to an existing security group) that potentially create more traffic or load on the instance. To start the investigation, we need to look in the EC2 console for the network interface ID and security groups of the impacted EC2 instance. Here is an example:

Network interface ID = eni-6c5ca5a8

Security group(s) = sg-5887f224, sg-e214609e

The following query can help us dive deep into the security group analysis. We’ll configure the query to filter for our network interface ID, security groups, and a time range starting 12 hours before the alarm occurred so we’re aware of recent changes. (CloudTrail log files use the ISO 8601 data elements and interchange format for date and time representation.)

Identify any security group changes for our EC2 instance:

select eventname, useridentity.username, sourceIPAddress, eventtime, requestparameters from cloudtrail_logs
where (requestparameters like '%sg-5887f224%' or requestparameters like '%sg-e214609e%' or requestparameters like '%eni-6c5ca5a8%')
and eventtime > '2017-02-15T00:00:00Z'
order by eventtime asc;

This query returned the following results:

eventname | username | sourceIPAddress | eventtime | requestparameters
DescribeInstances | | 72.21.196.68 | 2017-02-15T00:57:23Z | {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-5887f224"}]}}]}}
DescribeInstances | | 72.21.196.68 | 2017-02-15T00:57:24Z | {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-e214609e"}]}}]}}
DescribeInstances | | 72.21.196.68 | 2017-02-15T17:06:01Z | {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-e214609e"}]}}]}}
DescribeInstances | | 72.21.196.68 | 2017-02-15T17:06:01Z | {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-5887f224"}]}}]}}
DescribeSecurityGroups | | 72.21.196.70 | 2017-02-15T23:28:20Z | {"securityGroupSet":{},"securityGroupIdSet":{"items":[{"groupId":"sg-e214609e"}]},"filterSet":{}}
DescribeInstances | | 72.21.196.69 | 2017-02-16T11:25:23Z | {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-e214609e"}]}}]}}
DescribeInstances | | 72.21.196.69 | 2017-02-16T11:25:23Z | {"instancesSet":{},"filterSet":{"items":[{"name":"instance.group-id","valueSet":{"items":[{"value":"sg-5887f224"}]}}]}}
ModifyNetworkInterfaceAttribute | bobodell | 72.21.196.64 | 2017-02-16T19:09:55Z | {"networkInterfaceId":"eni-6c5ca5a8","groupSet":{"items":[{"groupId":"sg-e214609e"},{"groupId":"sg-5887f224"}]}}
AuthorizeSecurityGroupIngress | bobodell | 72.21.196.64 | 2017-02-16T19:42:02Z | {"groupId":"sg-5887f224","ipPermissions":{"items":[{"ipProtocol":"tcp","fromPort":143,"toPort":143,"groups":{},"ipRanges":{"items":[{"cidrIp":"0.0.0.0/0"}]},"ipv6Ranges":{},"prefixListIds":{}},{"ipProtocol":"tcp","fromPort":143,"toPort":143,"groups":{},"ipRanges":{},"ipv6Ranges":{"items":[{"cidrIpv6":"::/0"}]},"prefixListIds":{}}]}}

The results show that the ModifyNetworkInterfaceAttribute and AuthorizeSecurityGroupIngress API calls may have impacted the EC2 instance. The first call was initiated by user bobodell and attached two security groups to the EC2 instance. The second call, also initiated by user bobodell, was made approximately 33 minutes later, and successfully opened TCP port 143 (IMAP) up to the world (cidrIp: 0.0.0.0/0).

Although these changes may have been authorized, these details can be used to piece together a timeline of activity leading up to the alarm.
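To tighten that timeline, one option (our sketch, not part of the original analysis) is to filter out the read-only Describe* calls and keep only mutating events, using the readOnly field from the table definition; this assumes the SerDe surfaces the field as the string 'false':

select eventtime, eventname, useridentity.username, requestparameters from cloudtrail_logs
where (requestparameters like '%sg-5887f224%' or requestparameters like '%sg-e214609e%' or requestparameters like '%eni-6c5ca5a8%')
and readonly = 'false'
and eventtime > '2017-02-15T00:00:00Z'
order by eventtime asc;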

Console Sign-in activity

Whether it’s to help meet a compliance standard such as PCI, to adhere to a best-practice security framework such as NIST, or just to better understand who is accessing your assets, auditing your login activity is vital.

The following query can help identify the AWS Management Console logins that occurred over a 24-hour period. It returns details such as user name, IP address, time of day, whether the login was from a mobile console version, and whether multi-factor authentication was used.

select useridentity.username, sourceipaddress, eventtime, additionaleventdata
from default.cloudtrail_logs
where eventname = 'ConsoleLogin'
and eventtime >= '2017-02-17T00:00:00Z'
and eventtime < '2017-02-18T00:00:00Z';

Because potentially hundreds of logins occur every day, it’s important to identify those that seem to be outside the normal course of business. The following query returns logins that used a mobile console version, along with logins from outside our network (72.21.0.0/24) that occurred between midnight and 5:00 A.M.

select useridentity.username, sourceipaddress, json_extract_scalar(additionaleventdata, '$.MobileVersion') as MobileVersion, eventtime, additionaleventdata
from default.cloudtrail_logs
where eventname = 'ConsoleLogin'
and (json_extract_scalar(additionaleventdata, '$.MobileVersion') = 'Yes'
or sourceipaddress not like '72.21.%'
and eventtime >= '2017-02-17T00:00:00Z'
and eventtime < '2017-02-17T05:00:00Z');

Operational account activity

An important part of running workloads in AWS is understanding recurring errors, how administrators and employees are interacting with your workloads, and who or what is using root privileges in your account.

AWS event errors

Recurring error messages can be a sign of an incorrectly configured policy, the wrong permissions applied to an application, or an unknown change in your workloads. The following query shows the top 10 errors that have occurred from the start of the year.

select count (*) as TotalEvents, eventname, errorcode, errormessage
from cloudtrail_logs
where errorcode is not null
and eventtime >= '2017-01-01T00:00:00Z'
group by eventname, errorcode, errormessage
order by TotalEvents desc
limit 10;

The results show:

TotalEvents | eventname | errorcode | errormessage
1098 | DescribeAlarms | ValidationException | 1 validation error detected: Value 'INVALID_FOR_SUMMARY' at 'stateValue' failed to satisfy constraint: Member must satisfy enum value set: [INSUFFICIENT_DATA, ALARM, OK]
182 | GetBucketPolicy | NoSuchBucketPolicy | The bucket policy does not exist
179 | HeadBucket | AccessDenied | Access Denied
48 | GetAccountPasswordPolicy | NoSuchEntityException | The Password Policy with domain name 341277845616 cannot be found.
36 | GetBucketTagging | NoSuchTagSet | The TagSet does not exist
36 | GetBucketReplication | ReplicationConfigurationNotFoundError | The replication configuration was not found
36 | GetBucketWebsite | NoSuchWebsiteConfiguration | The specified bucket does not have a website configuration
32 | DescribeNetworkInterfaces | Client.RequestLimitExceeded | Request limit exceeded.
30 | GetBucketCors | NoSuchCORSConfiguration | The CORS configuration does not exist
30 | GetBucketLifecycle | NoSuchLifecycleConfiguration | The lifecycle configuration does not exist

These errors might indicate an incorrectly configured CloudWatch alarm or S3 bucket policy.

Top IAM users

The following query shows the top IAM users and activities by eventname from the beginning of the year.

select count (*) as TotalEvents, useridentity.username, eventname
from cloudtrail_logs
where eventtime >= '2017-01-01T00:00:00Z'
and useridentity.type = 'IAMUser'
group by useridentity.username, eventname
order by TotalEvents desc;

The results will show the total activities initiated by each IAM user and the eventname for those activities.

Like the Console sign-in activity query in the previous section, this query could be modified to filter the activity to view only events that occurred outside of the known network or after hours.
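As a sketch of that modification (reusing the 72.21.0.0/24 network range from the sign-in example; substitute your own known network), the query might become:

select count (*) as TotalEvents, useridentity.username, eventname
from cloudtrail_logs
where eventtime >= '2017-01-01T00:00:00Z'
and useridentity.type = 'IAMUser'
and sourceipaddress not like '72.21.%'
group by useridentity.username, eventname
order by TotalEvents desc;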

Root activity

Another useful query is to understand how the root account and credentials are being used and which activities are being performed by root.

The following query will look at the top events initiated by root from the beginning of the year. It will show whether these were direct root activities or whether they were invoked by an AWS service (and, if so, which one) to perform an activity.

select count (*) as TotalEvents, eventname, useridentity.invokedby
from cloudtrail_logs
where eventtime >= '2017-01-01T00:00:00Z'
and useridentity.type = 'Root'
group by useridentity.username, eventname, useridentity.invokedby
order by TotalEvents desc;

Summary

 AWS CloudTrail and Amazon Athena are a powerful combination that can help organizations better understand the operations, governance, and security of assets and resources in their AWS accounts without a significant investment of time and resources.


About the Authors

 

Sai Sriparasa is a consultant with AWS Professional Services. He works with our customers to provide strategic and tactical big data solutions with an emphasis on automation, operations & security on AWS. In his spare time, he follows sports and current affairs.

Bob O’Dell is a Sr. Product Manager for AWS CloudTrail. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of AWS accounts. Bob enjoys working with customers to understand how CloudTrail can meet their needs and continue to be an integral part of their solutions going forward. In his spare time, he enjoys spending time with HRB exploring the new world of yoga and adventuring through the Pacific Northwest.


Related

Analyzing Data in S3 using Amazon Athena


Which VPN Services Keep You Anonymous in 2017?


Post Syndicated from Ernesto original https://torrentfreak.com/vpn-services-anonymous-review-2017-170304/

Millions of Internet users around the world use a VPN to protect their privacy online.

Unfortunately, however, not all VPN services are as private as you might think. In fact, some are known to keep extensive logs that can easily identify specific users on their network.

This is the main reason why we have launched a yearly VPN review, asking providers about their respective logging policies as well as other security and privacy aspects. This year’s questions are as follows:

1. Do you keep ANY logs which would allow you to match an IP-address and a time stamp to a user/users of your service? If so, what information do you hold and for how long?

2. What is the registered name of the company and under what jurisdiction(s) does it operate?

3. Do you use any external visitor tracking, email providers or support tools that hold information about your users/visitors?

4. In the event you receive a takedown notice (DMCA or other), how are these handled?

5. What steps are taken when a valid court order or subpoena requires your company to identify an active user of your service? Has this ever happened?

6. Is BitTorrent and other file-sharing traffic allowed (and treated equally to other traffic) on all servers? If not, why?

7. Which payment systems do you use and how are these linked to individual user accounts?

8. What is the most secure VPN connection and encryption algorithm you would recommend to your users?

9. How do you currently handle IPv6 connections and potential IPv6 leaks? Do you provide DNS leak protection and tools such as “kill switches” if a connection drops?

10. Do you offer a custom VPN application to your users? If so, for which platforms?

11. Do you have physical control over your VPN servers and network or are they hosted by/accessible to a third party? Do you use your own DNS servers?

12. What countries are your servers located in?

Below is the list of responses from the VPN services, in their own words. Providers who didn’t answer our questions directly, or who were disqualified by extensive logging, were excluded. We specifically chose to leave room for detailed answers where needed. The order of the list holds no value.

Private Internet Access

1. We do not store any logs relating to traffic, session, DNS or metadata. There are no logs for any person or entity to match an IP address and a timestamp to a user of our service. In other words, we do not log, period. Privacy is our policy.

2. Private Internet Access is operated by London Trust Media, Inc., with branches in the US and Iceland, which are a few of the countries that still respect privacy and do not have a mandatory data retention policy. Additionally, since we operate from the countries with the strongest of consumer protection laws, our beloved customers are able to purchase with confidence.

3. All of our VPN systems and tools are proprietary and maintained in house. We utilize some third-party tools in order to provide a better customer experience. By Q3 2017, all of these third party tools will be transitioned to in-house solutions.

4. We do not monitor our users, and we keep no logs, period. That said, we have an active, proprietary system in place to help mitigate abuse.

5. Every subpoena is scrutinized to the highest extent for compliance with both the “spirit” and “letter of the law.” While we have not received valid court orders, we periodically receive subpoenas from law enforcement agencies that we scrutinize for compliance and respond accordingly. This is all driven based upon our commitment to privacy. All this being said, we do not log and do not have any data on our customers other than their signup e-mail and account username.

6. BitTorrent and file-sharing traffic are allowed and treated equally to all other traffic (although it’s routed through a second VPN in some cases). We do not censor our traffic, period.

7. We utilize a variety of payment systems, including, but not limited to: PayPal, Credit Card (with Stripe), Amazon, Google, Bitcoin, CashU, OKPay, and any major store-bought gift card. Payment data is not linked nor linkable to user activity.

8. Currently, the most secure and practical encryption algorithm that we recommend to our users would be our cipher suite of AES-256 + RSA4096 + SHA256.

9. Yes, our users gain a plethora of additional protections, including but not limited to:

(a) Kill Switch: Ensures that traffic is routed through the VPN such that if the VPN connection is unexpectedly terminated, the traffic will not route.
(b) IPv6 Leak Protection: Protects clients from websites which may include IPv6 embeds, which could lead to IPv6 IP information coming out.
(c) DNS Leak Protection: This is built-in and ensures that DNS requests are made through the VPN on a safe, private, no-log DNS daemon.
(d) Shared IP System: We mix clients’ traffic with many other clients’ traffic through the use of an anonymous shared-IP system ensuring that our users blend in with the crowd.
(e) MACE™: Protects users from malware, trackers, and ads

10. We have custom applications to which our users have left amazing reviews. PIA has clients for the following platforms: Windows, Mac OS X, Linux, Android, iOS and a Chrome Extension (Coming soon). Additionally, users of other operating systems can connect with other protocols including OpenVPN, SOCKS5 (unencrypted), and IPSec, among others.

11. We utilize our own bare metal servers in third-party datacenters that are operated by trusted friends and, now, business partners whom we have met and on which we have completed serious due diligence. Our servers are located in facilities including 100TB, Choopa, Leaseweb, among others.

We also operate our own DNS servers on our high throughput network. These servers are private and do not log.

12. As of the beginning of 2017, we operate 3283 servers across 37 locations in 25 countries. For more information on what countries are available, please visit our network information page.

Private Internet Access website

ExpressVPN

1. ExpressVPN is an anonymous, offshore, zero-log VPN service provider. We are in the business of keeping our customers private and secure.

We do not possess information that would enable us to identify a user by an IP and timestamp produced as part of an investigation. ExpressVPN IPs are shared among customers, and we don’t have the ability to match a customer to an IP address. We designed our network to maximize privacy protection for our customers.

2. Express VPN International Ltd. is a BVI (British Virgin Islands) company. The BVI is a small, independent nation in the Caribbean renowned as an offshore jurisdiction with strict privacy regulations and no data retention laws.

3. We use 3rd party website analytics tools such as Google Analytics. We use Zendesk for support tickets and Snapengage for live chat. We believe that these are secure platforms.

Information about how you use the VPN itself (such as browsing history, traffic data or DNS queries) is never revealed to 3rd parties and is never logged or stored by ExpressVPN.

4. As we are a network service provider rather than a content host, there is nothing to take down. We also do not attempt to identify an ExpressVPN user in this case, report the user, or otherwise restrict service. Our customers should rest assured that their anonymity is protected.

5. VPN companies receive subpoenas and other legal requests as a matter of regular occurrence. This is one of the most significant advantages of our BVI jurisdiction. A court order would need to take place in the BVI for it to be legally valid. If we receive a request from another jurisdiction, we let them know that we don’t maintain logs that would enable us to match an IP address to an ExpressVPN user.

6. ExpressVPN allows all traffic including BitTorrent from all VPN servers and does not impose restrictions based on the type of traffic our users send.

7. ExpressVPN accepts all major credit cards including VISA, MasterCard and American Express. We also accept PayPal and a large number of local payment options. For users who want maximum privacy and don’t want to send us personally identifying payment information, we recommend bitcoin. In fact, we’ve written a complete guide to protecting your financial privacy with bitcoin.

8. In most cases we recommend (and default to) OpenVPN UDP. Our apps use a 4096-bit CA, AES-256-CBC encryption, TLSv1.2, and SHA512 signatures to authenticate our servers.

9. Yes, we call this leak protection feature “Network Lock”, and it is turned on by default. Network Lock prevents all types of traffic including IPv4, IPv6, and DNS from leaking outside of the VPN, such as when your Internet connection drops or in various additional scenarios where other VPNs might leak.

10. ExpressVPN has award-winning apps for Windows, Mac, iOS, Android, Linux, and routers. Our apps are designed to make it easy for users to choose a VPN location and get connected. They also offer much better security and privacy protection than manually configuring a VPN. With the ExpressVPN App for Routers, we make it easy to protect every device in your home using a VPN that is always connected.

11. Our VPN servers are hosted by trusted data centers with strong security practices. The data center employees do not have server credentials, and the server disks are fully encrypted to mitigate any risks from physical seizure. We run our own zero-knowledge DNS on every server (no 3rd party DNS).

12. ExpressVPN has thousands of high speed servers in 145 locations across 94 countries. See the full list here.

ExpressVPN website

NordVPN

1. As stated in our terms of service, we do not monitor, record or store any VPN user logs. We do not store connection time stamps, used bandwidth, traffic logs, or IP addresses.

2. The registered company name is Tefincom co S.A., and it operates under the jurisdiction of Panama.

3. We use Google Analytics and third-party ticket/live chat tools (Zendesk/Zopim). Google Analytics is used to improve our website and provide our users with the most relevant information. The ticket/live chat tool is used to provide the best support in the industry (available 24/7), but it does not track our users by any means.

4. We operate under Panama’s jurisdiction, where DMCA and similar orders have no legal bearing. Therefore, they do not apply to us.

5. If the order or subpoena is issued by a Panamanian court, we would have to provide the information if we had any. However, our zero-log policy means that we don’t have any information about our users’ online activity. So far, we haven’t had any such cases.

6. Yes, we allow P2P traffic. We have optimized a number of our servers specifically for file-sharing, ensuring other servers, which are meant for streaming and other purposes, have uninterrupted speeds. In any case, we do not engage in bandwidth throttling for P2P users.

7. Our customers can pay via credit card, PayPal and Bitcoin. We do store the standard billing information for refund purposes, but it can not be related to any Internet activity of a particular customer. Bitcoin is the most anonymous option, as we do not link the payment details with the user identity or other personal information.

8. NordVPN uses NGE (Next Generation Encryption) in IKEv2/IPsec. The ciphers used to generate Phase1 keys are AES-256-GCM for encryption, coupled with SHA2-384 to ensure integrity, combined with PFS (Perfect Forward Secrecy) using 3072-bit Diffie Hellmann keys. IKEv2 protocol is used by default in our OS X and iOS apps, and it can be manually set up on Windows and Android OS. We are also exploring possibilities to develop IKEv2-based apps for Android and Windows. At the moment, the Windows and Android apps use AES-256-CBC encryption with a 2048-bit key.

9. Yes, we do provide both an automatic app-level kill switch and a feature for DNS leak protection. Our OS X, Windows, iOS and Android apps have IPv6 leak protection implemented. NordVPN service will not leak IPv6 address.

10. We have custom VPN applications for Windows, MacOS, Android, and iOS. All NordVPN apps are very easy to install and use, even with no previous experience with VPN services.

11. We use a hybrid model, whereby we control some of our servers but also partner with premium data centers with strong security practices. Furthermore, due to our special server configuration, no one can retain or collect any data. All servers have been set up with a zero logs policy. We do have specific requirements for network providers to ensure the highest service quality for our customers. We do have our own DNS servers, and all DNS requests go through those.

12. At the moment, we have 741 servers in 58 countries. You can find the full list here.

NordVPN user reviews

TorGuard

1. No logs or time stamps are kept whatsoever. TorGuard does not store any traffic logs or user session data on our network. In addition to a strict no-logging policy we run a shared IP configuration across all servers. Because there are no logs kept and multiple users share a single IP address, it is not possible to match any user with an IP and time stamp.

2. TorGuard is owned and operated by VPNetworks LLC under US jurisdiction, with our parent company VPNetworks LTD, LLC based in Nevis.

3. We use anonymized Google Analytics data to optimize our website and Sendgrid for transactional email. TorGuard’s 24/7 live chat services are provided through Livechatinc’s platform. Customer support desk requests are maintained by TorGuard’s own private ticketing system.

4. In the event a valid DMCA notice is received it is immediately processed by our abuse team. Due to our no log and no time stamp policy and shared IP network – we are unable to forward any requests to a single user.

5. If a court order is received, it is first handled by our legal team and examined for validity in our jurisdiction. Should it be deemed valid, our legal representation would be forced to further explain the nature of our network and shared IP configuration and the fact that we do not hold any identifying logs or time stamps to pinpoint any specific user. We have never been able to identify any active user from an IP and time stamp.

6. Yes, BitTorrent and all P2P traffic is allowed. By default we do not block or limit any types of traffic across our network.

7. We currently offer over 200 different payment options. This includes all forms of credit card, PayPal, Bitcoin, altcoins (e.g. Ether, Litecoin + more), Alipay, UnionPay, CashU, 100+ gift card brands, and many other local payment options. No user can be linked back to a billing account because we maintain zero logs across our network.

8. For best security, we advise clients to use OpenVPN and select the cipher option AES-256-CBC, with 4096bit RSA and SHA512 HMAC. We use TLS 1.2 on all servers with perfect forward secrecy enabled. For faster speeds and “obfuscated” Stealth VPN access, we suggest using OpenConnect SSL VPN with cipher option AES-256-GCM. TorGuard offers a wide range of VPN protocols, including OpenVPN, L2TP, IPsec, SSTP, OpenConnect/AnyConnect (SSL VPN), and iKEV2 – we still offer PPTP for those of you who need it, but we don’t recommend it.

9. TorGuard’s VPN software provides strict security features by automatically disabling IPv6 and blocking any potential DNS or WebRTC leaks. We offer a full connection kill switch that safeguards your VPN traffic against accidental disconnects and can hard kill your interfaces if needed, and an application kill switch that can terminate specific apps if the VPN connection is interrupted for additional safety. All recommended security features are enabled the moment you install TorGuard to ensure by default you have max security while tunneling through our network.

10. TorGuard’s popular VPN client is available for all versions of Windows, Mac OSX, Linux, Android, and iOS. We also offer easy DDWRT and Tomato setup tools for VPN routers, and a Firefox/Chrome SSL proxy app. To stay up to date with current security threats, our VPN software is actively developed and constantly evolving.

11. We retain full physical control over all hardware and only seek partnerships with data centers who can meet our strict security criteria. All servers are deployed and managed exclusively by TorGuard staff. Because there are no logs kept on any TorGuard VPN and Proxy servers, there is no risk of data theft should a machine become seized.

TorGuard VPN apps default to using internal secure no-log DNS servers that run on each VPN endpoint. We suggest this configuration for highest levels of privacy, however, clients can customize their DNS settings and choose from zero log TorGuard public DNS, Google DNS, Level3, or a customized DNS entry of their choosing.

12. TorGuard currently maintains thousands of servers in over 53 countries around the world, and we continue to expand the network every month. All customers get full access to our network.

TorGuard Reviews

Anonymizer

1. Anonymizer does not log ANY traffic that traverses our system, ever. We do not maintain any logs that would allow you to match an IP-address and time stamp to a user of our service.

2. Our company is registered as Anonymizer Inc. Anonymizer Inc. operates under U.S. jurisdiction where there are no data retention laws.

3. Anonymizer uses a ticketing system for support but does not request user verification unless it is needed specifically in support of a ticket. Anonymizer uses a bulk email service for email marketing but does not store any details on the individual email address that would connect them to being an existing customer.

Anonymizer uses Google Analytics and Google AdWords to support general marketing to new customers. Neither of these tools stores identifiable information on any unique customer or provides any way to identify a specific individual as a user of our service. We also actively ensure no link is created from the data in either system to any specific customer following a trial or purchase of our product.

4. Since Anonymizer does not log any traffic that comes over our system, we have nothing to provide in response to DMCA requests. None of our users have ever been issued a DMCA takedown notice or the European equivalent. We’ve been around for over two decades – making us one of the oldest services out there – and we’ve never turned over information of that kind.

5. Anonymizer Inc. is required by law to respond to all valid court orders and subpoenas. Since we do not log any traffic that comes over our system, we have nothing to provide in response to requests associated with service use. If a user paid by credit card we can only confirm that they purchased access to our service.

There is, and would be, no way to connect a specific user to specific traffic ever. There have been instances where we did receive valid court orders and followed the procedures above. In our 20 years of service, we have never identified details about a customer’s traffic or activities.

6. All traffic is allowed on all of our servers, so long as it complies with our EULA and Terms of Service.

7. Anonymizer Inc. uses a payment processor for our credit card payments. There is a record of the payment for the service and the billing information associated with the credit card confirming the service has been paid for. We also offer a cash payment option. Cash payment options do not store any details.

8. We would recommend OpenVPN for a user that is looking for the most secure connection. We feel it is the most reliable and stable connection protocol currently. Our OpenVPN implementation uses AES-256. We also offer L2TP/IPSEC.

9. Anonymizer’s client software does not support IPv6 connections. All customers are asked to disable IPv6 connections for the application to function. Our client software does have the option to enable a kill switch that prevents any web traffic from exiting your machine without going through the VPN.

10. We offer a custom VPN application for MacOS and Windows. By default, the application logs only fatal errors, i.e. errors that prevent the application from running.

11. We own ALL of our hardware and have full physical control of our servers. No third party has access to our environment. We operate our own DNS servers.

12. We have servers in the United States and Netherlands.

Anonymizer website

Ipredator

1. No logs are retained that would allow the correlation of a user’s IP address to a VPN address. The session database does not include the origin IP address of the user. Once a connection has been terminated the session information is deleted from the session database.

2. The name of the company is PrivActually Ltd. which operates out of Cyprus.

3. We do not use any visitor tracking mechanisms, not even passive ones that analyze the webserver logs. We run our own mail infrastructure and do not use 3rd party products like Gmail. Neither do we use data hogs like a ticket system to manage support requests. We stick to a simple mail system and delete old data from our mailboxes after three months.

4. The staff forwards DMCA notices to the BOFH. Notices sent via paper are usually converted into energy by combustion … to power the data center in the basement where the BOFH lives. Digital SPAM^WDMCA notices are looped back into the kernel to increase the entropy of the VPN’s /dev/random devices.

5. We evaluate the request according to the legal frameworks set forth in the jurisdictions we operate in and react accordingly. We had multiple cases where somebody tried but did not succeed to identify active users on the system.

6. Besides filtering SMTP on port 25, we do not impose any restrictions on the protocols our users can use on the VPN, quite the contrary. We believe our role is to provide net-neutral internet access. Every user is free to share his/her/its files. We are conservative people and firmly believe in the heritage of our society, which was built upon the free exchange of cultural knowledge. This new-age patent system, and the idea that we need companies that milk creators, are simply alien to us.

7. We offer PayPal, Bitcoin, Payza, and Payson fully integrated. OkPay, Transferwise, WU, PerfectMoney, Webmoney, Amazon gift cards, cash, and credit cards on request. An internal transaction ID is used to link payments to their payment processors. We do not store any other data about payments associated with the user’s account.

8. We provide up-to-date config files and enforce TLS 1.2 for the control channel on all supported systems. For further protection, we provide detailed setup instructions for our users. Besides the public and VPN-internal DNS servers, we also support DNSCrypt as a means to encrypt DNS requests. Howtos for kill switches are available as well. We do not enforce a particular client.

9. Users can connect to a dual-stack VPN pool that provides IPv4 as well as IPv6 connectivity. Unfortunately, enabling IPv6 for all clients still breaks quite a few setups. Hopefully broader adoption of the OpenVPN 2.4 branch will allow this to work properly. Users can use this page to check for a number of leaks.

Kill switches that provide protection from connection drops are part of the client installation. There is not much we can do against drops on the server side. If the user’s client of choice has built-in support for kill switches, he/she can just use that. If people use the vanilla OpenVPN client, the up/down script hooks provide everything needed to run custom configs that terminate applications when the VPN connection drops.
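For vanilla OpenVPN users, the script hooks mentioned above look roughly like the sketch below. The hook scripts themselves are hypothetical placeholders for whatever firewall or process-killing logic the user wants run on connect and disconnect:

tls-version-min 1.2            # the TLS 1.2 enforcement described in answer 8
script-security 2              # allow user-supplied scripts to be executed
up /etc/openvpn/vpn-up.sh      # hypothetical: e.g. open the firewall for tunnel traffic
down /etc/openvpn/vpn-down.sh  # hypothetical: e.g. block traffic or kill apps on drop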

DNS and IPv6 leaks are just two issues among many that users face in their quest for online privacy. Most privacy issues cannot be easily fixed by the VPN provider itself, but require knowledge and diligence on the part of the users themselves. We therefore ask our users to go through our interactive checklist to improve their online privacy.

10. No, we do not offer a custom VPN application to our users. Users are free to choose which client they want to use. We think that giving users a closed source client is against our core principles.

11. We own our complete setup, network, and data center with everything in it; no 3rd parties are allowed access. We do not trust 3rd parties to operate our core infrastructure.

There are dedicated DNS servers that are given to clients for resolving DNS queries from within the VPN. Furthermore, we encourage users to use DNSCrypt or similar technologies, ideally splitting their DNS queries over multiple DNSCrypt instances and running a local resolver to minimize DNS requests in the first place.

12. They are in Sweden due to the laws that allow us to run our service in a privacy-protecting manner. In times where basically everyone in the VPN market is advertising with servers in a gazillion countries, this might seem like a disadvantage. We see this very differently.

The core for any privacy service is trust in the integrity of the underlying infrastructure. Everything else has to build upon that. There is no way we could run such a tight ship and controlled environment with servers all over the world, and we will not compromise on the quality of our setup.

Ipredator website

SlickVPN

1. SlickVPN does not log any traffic nor session data of any kind.

2. Slick Networks, Inc. is our recognized corporate name. We operate a complex business structure with multiple layers of Offshore Holding Companies, Subsidiary Holding Companies, and finally some Operating Companies to help protect our interests. The main marketing entity for our business is based in the United States of America and an operational entity is based out of Nevis.

3. We utilize third party email systems to contact clients who opt in for our newsletters and Google Analytics for basic website traffic monitoring and troubleshooting.

4. If a valid DMCA complaint is received while the offending connection is still active, we stop the session and notify the active user of that session. Otherwise, we are unable to act on any complaint as we have no way of tracking down the user. It is important to note that we ALMOST NEVER receive a VALID DMCA complaint while a user is still in an active session.

5. This has never happened in the history of our company. Our customers’ privacy is of utmost importance to us. We are required to comply with all valid court orders, and we would proceed with a court order with complete transparency, but we have no data to provide any court in any jurisdiction. We would not rule out relocating our businesses to a new jurisdiction if required.

6. Yes, all traffic is allowed.

7. We accept PayPal, Credit Cards, Bitcoin, Cash, and Money Orders. We keep user authentication and billing information on independent platforms. One platform is operated out of the United States of America and the other platform is operated out of Nevis. We offer the ability for the customer to permanently delete their payment information from our servers at any point. All customer data is automatically removed from our records shortly after the customer ceases being a paying member.

8. We recommend using OpenVPN if at all possible (available for Windows, Apple, Linux, iOS, Android); it uses the AES-256-CBC algorithm for encryption.

9. Our Windows and Mac client disable IPv6 as part of our IP and DNS leak protection. Our IP leak protection proactively keeps your IPv4 and IPv6 traffic from leaking to untrusted networks. Your network will be disabled if you lose the connection to our servers and the only way to restore the network is manual intervention by the user.

10. Yes. Our users are provided with a custom client, designed by our in-house engineers. Currently, the client works with Windows and Mac products. Our client does NOT store logs on customer computers by default. We also provide guides for every other platform.

11. We run a mix. We physically control some of our server locations where we have a heavier load. Other locations are hosted with third parties unless there is enough demand in that location to justify racking our own server setup. To ensure redundancy, we host with multiple providers in each location. We have server locations in over forty countries.

In all cases, our network nodes load over our encrypted network stack and run from ramdisk. Anyone taking control of the server would have no usable data on the disk. We run an algorithm to randomly reboot each server on a regular basis so we can clear the ramdisk. DNS is assigned by the server when a user logs in.

12. At SlickVPN we actually go to the expense of putting a physical server in each country that we list. SlickVPN offers service in 40 countries around the world.

SlickVPN reviews

Mullvad

1. No.

2. Amagicom AB, Sweden.

3. We have no external elements at all on our website. We do use external email and encourage people who send us email to use PGP encryption, which is the only effective way to keep email somewhat private. The decrypted content is only available to us.

4. There is no such Swedish law that applies to us.

5. We get requests from governments from time to time. They never get any information about our users. We make sure not to store sensitive information that can be tied to publicly available information, so that we have nothing to give out. We believe it is not possible under Swedish law to construct a court order that would compel us to actually give out information about our users. Not that we would anyway. We started this service for political reasons and would rather discontinue it than have it work against its purpose.

6. We do not block or throttle BitTorrent or other file-sharing protocols. All traffic is treated equally.

7. We explain that in more detail here, but we offer bank wire, Swish, PayPal (credit cards), Bitcoin and cash. Cash and Bitcoin are the most anonymous. We run our own full Bitcoin node and don’t use third parties for any step in the Bitcoin payment process, from the generation of QR codes to adding time to accounts.

8. OpenVPN, AES-256, handshake encryption RSA-2048.

9. We offer the option to tunnel or not tunnel IPv6 (if not, IPv6 is blocked), and the kill switch and DNS leak protection work the same for IPv6 as for IPv4. There is both a kill switch in our client and a SOCKS5 proxy that is only accessible via our VPN (i.e. if you set your browser to use it, the browser will not work if the VPN is down).

10. Yes: Windows, Mac, Linux

11. We have physical control at four sites: three in Sweden and one in Amsterdam (i.e. all our own servers are in Sweden and Amsterdam). The rest is hosted by carefully selected providers. Yes, we use our own DNS servers.

12. Australia, Austria, Belgium, Bulgaria, Canada, Czech Rep., Denmark, Germany, Lithuania, Israel, Italy, Netherlands, Norway, Romania, Singapore, Spain, Sweden, Switzerland, UK, USA

An up to date list is available here.

Mullvad website

BlackVPN

1. No. We purge all this information when the user disconnects from the VPN.

2. The name of the company is BLACKVPN LIMITED and is registered in Hong Kong and operates under the jurisdiction of Hong Kong.

3. We run our own email server plus support and live chat systems using open source tools. We use StreamSend for sending generic welcome and renewal reminder emails, as well as for the occasional news updates. We have Twitter widgets on our frontpage that may track visitors. We use Google Analytics as well as our own website analytics (Piwik).

4. We block the port on the server listed in the notice.

5. If we received a valid court order from a Hong Kong court, then we would be legally obliged to obey it. So far this has never happened.

6. BitTorrent traffic is not restricted in our Privacy VPN locations, but due to stricter enforcement of DMCA notices in the USA and UK we restrict most BitTorrent traffic there and only whitelist torrents of open source software.

7. PayPal, Bitcoin and PaymentWall (for Credit Cards and Bank Transfers). The transaction details (ID, time, amount, etc) are linked to each user account.

8. We recommend using OpenVPN 2.4, and we support the new GCM cipher mode (AES-256-GCM) together with 4096-bit RSA and Diffie-Hellman keys. With OpenVPN we also enforce DHE/ECDHE-enabled cipher suites, and key exchange is done with Diffie-Hellman, providing forward secrecy.
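Assuming OpenVPN 2.4 client syntax, the combination described here could be expressed along these lines; this is a sketch of the named parameters, not their actual config:

cipher AES-256-GCM                                 # AEAD data-channel cipher
tls-cipher TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384   # restrict the control channel to (EC)DHE suites
# the 4096-bit RSA and Diffie-Hellman parameters are properties of the server's
# certificate and DH file rather than client-side directives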

9. For OpenVPN, we stop IPv6 leaks with the OpenVPN config, and we also disable and blackhole all IPv6 traffic server-side. The open source OpenVPN client has DNS leak prevention built in and in most cases will not leak data during reconnections. Our upcoming custom VPN app will be able to provide 100% IPv6 and DNS leak protection client-side and will also have a “kill switch”.

10. We have a custom open source Android app and we are working on custom Windows/MacOS apps as well. For the moment we build pre-configured versions of the open source OpenVPN clients for Windows and MacOS.

11. We use dedicated servers which are hosted in 3rd party data centers, but they do not have access to login or manage the server. We run our own DNS servers which do not save any logs.

12. USA, UK, Australia, Brazil, Canada, Czech Republic, Estonia, France, Germany, Japan, Lithuania, Luxembourg, Netherlands, Norway, Romania, Russia, Spain, Switzerland and Ukraine.

BlackVPN website

VPNArea

1. We do not keep or record any logs. We’re therefore not able to match an IP-address and a time stamp to a user of our service. We also do not keep or record any usage logs.

2. The registered name of our company is “Offshore Security EOOD” (spelled “ОФШОР СЕКЮРИТИ ЕООД” in Bulgarian). We’re a VAT registered business. We operate under the jurisdiction of Bulgaria.

3. The only external tool we use is Zopim LiveChat. Our email system is hosted on our own servers in Switzerland, and we use email and osTicket for support, also hosted on our own servers in Switzerland. We also offer Skype as a support option.

4. DMCA notices are not forwarded to our members as we’re unable to identify a responsible user due to not having any logs. We would reply to the DMCA notices explaining that we do not host or hold any copyrighted content ourselves and we’re not able to identify or penalize a user of our service.

5. This has not happened yet. Should it happen, our attorney will examine the validity of the court order in accordance with our jurisdiction; we will then point the appropriate party to our no-logs policy, explaining that we are not able to match a user to an IP or timestamp because we do not keep or record any logs.

6. BitTorrent/P2P is allowed on most of our servers, but not all of them. Why not? Some of the data centers we use are not tolerant of DMCA notices, but some of our members utilize those servers for other activities not related to torrenting. That is why we keep them in our network despite the inability to use P2P/torrents on them. Most of our VPN servers and locations do allow torrents and P2P.

7. We accept PayPal, credit/debit cards and Webmoney via a 3rd party payment processor, as well as Bitcoin and Payza. We do not require personal details to register an account with us. In the case of Bitcoin payments, we do not link users to transactions. In the case of PayPal/Payza/card payments, we link usernames to their transactions so we can process a refund. We do not have a recurring payments system.

8. We use AES-256-CBC + RSA2048 + SHA256 cipher on all our VPN servers without exception. We also have Double VPN servers, where for example the traffic goes through Russia and Israel before reaching the final destination.

9. In both our Windows and Mac software we have an optional setting to disable IPv6 connectivity on the computer to prevent IPv6 leaks. We have DNS leak protection as an optional setting in our Windows, Mac and Android apps. We have a kill switch in our Windows and Mac software.

10. We do have custom VPN applications for Windows, Mac and Android. We have a custom app for iOS too, which serves as a helper tool for “OpenVPN Connect”.

11. We work with reliable and established data centers. Nobody but us has virtual access to our servers. The log directories are entirely wiped and disabled, rendering even physical brute-force access to the servers useless in terms of identifying users.

12. We currently have servers in 65 countries.

VPNArea website

IPVanish

1. IPVanish is a no log VPN.

2. Mudhook Marketing, Inc., registered in the State of Florida.

3. We use basic inbound marketing tools like Google Analytics, but we do not track or store personally identifiable information (PII) from these tools. We also do not track the browsing activities of users who are logged into our VPN service.

4. We do not store, host, stream or provide any content, media, images or files that would be subject to a properly formed takedown notice.

5. First, any request has to be a valid and lawful request before we will even acknowledge the request. If the request is for user data or identification of a subscriber based on an IP address, we inform the agency making the request that we do not keep any logs and we operate in a Jurisdiction that does not require mandatory data retention.

Sometimes, legal agencies or authorities may not be happy with this response. We politely remind them that IPVanish operates within the letter of the law and is a valid and needed service to protect the privacy of its subscribers.

6. Yes, BitTorrent and other file-sharing traffic is allowed.

7. Bitcoin, PayPal, and all major credit cards are accepted. Payments and service use are in no way linked.

8. We recommend OpenVPN with 256 bit AES as the most secure VPN connection and encryption algorithm.

9. IPVanish has a kill switch feature that terminates all network traffic to prevent any DNS leaks in the event your VPN connection drops. We also have a user-enabled option that automatically and randomly changes your IP address at selected time intervals. We currently do not support IPv6; support will be rolled out in an upcoming update. All traffic is forced over IPv4 to prevent IP leaks.

10. We offer a custom VPN application for iOS, Android, Windows, and Mac. IPVanish is also configurable with DD-WRT and Tomato routers (pre-configured routers available), gaming consoles, Ubuntu and Chromebook.

11. We own and have physical control over our entire operational infrastructure, including the servers. Unlike other VPN services, we actually own and operate a global IP network backbone optimized for VPN delivery, which ensures the fastest speeds of any VPN provider.

12. We have servers in over 60 countries including the US, Australia, United Kingdom, Canada and more. You can view the complete list on our VPN servers page.

IPVanish website

IVPN

1. No. Not doing so is fundamental to any privacy service, regardless of the security or policies implemented to protect the log data. In addition, it is not within our interest to do so, as it would increase our liability, and it is not required by the laws of any jurisdiction that IVPN operates in.

2. Privatus Limited, Gibraltar.

3. No. We made a strategic decision from day one that no company or customer data would ever be stored on 3rd party systems. Our customer support software, email, web analytics (Piwik), issue tracker, monitoring servers, code repos, configuration management servers etc. all run on our own dedicated servers that we set up, configure and manage. No 3rd parties have access to our servers or data.

4. Our legal department sends a reply stating that we do not store content on our servers and that our VPN servers act only as a conduit for data. In addition, we inform them that we never store the IP addresses of customers connected to our network nor are we legally required to do so.

5. Firstly, this has never happened. However, if asked to identify a customer based on a timestamp and/or IP address then we would reply factually that we do not store this information, so we are unable to provide it. If they provide us with an email address and we are asked for the customer’s identity, then we would reply that we do not store any personal data.

If the company is served with a valid court order that did not breach the Data Protection Act 2004, we could only confirm that an email address was or was not associated with an active account at the time in question.

6. Yes, all file sharing traffic is permitted and treated equally on all servers. We do encourage customers to use non-USA based exit servers for P2P as any company receiving a large number of DMCA notices is exposing themselves to legal action and our upstream providers have threatened to disconnect our servers in the past.

7. We accept Bitcoin, Cash, PayPal and credit cards. When using cash, there is no link to a user account within our system. When using Bitcoin, we store the Bitcoin transaction ID in our system. If you wish to remain anonymous to IVPN you should take the necessary precautions when purchasing Bitcoin. When paying with PayPal or a credit card a token is stored that is used to process recurring payments. This information is deleted immediately when an account is terminated.

8. We provide RSA-4096 / AES-256 with OpenVPN, which we believe is more than secure enough for our customers’ needs. If you are the target of a state level adversary or other such well-funded body you should be far more concerned with increasing your general opsec (e.g. $5 wrench – https://xkcd.com/538/) than worrying about 2048 vs 4096 bit keys.

9. This is a huge problem for most VPN providers as shown by the comprehensive tests undertaken at VPNtesting.info (IVPN sponsored this project).

The IVPN client offers an advanced VPN firewall that blocks every type of IP leak possible (IPv6, DNS, network failures, WebRTC STUN, etc.). It is impossible for any data to leak if a connection drops, as the firewall will not deactivate until explicitly instructed to do so. It also has an ‘always on’ mode that is activated on boot, before any process on the computer starts, to ensure that no packets are ever able to leak outside of the VPN tunnel, regardless of the connection state of the VPN.

10. Yes, we offer a custom OpenVPN client for Windows and MacOS which includes our advanced VPN firewall that blocks every type of possible IP leak. We have also recently released an iOS app and plan to release an Android version later this year.

11. We use bare metal dedicated servers leased from 3rd party data centers in each country where we have a presence. We install each server using our own custom images and employ full disk encryption to ensure that if a server is ever seized the data is worthless.

We also operate an exclusive multi-hop network allowing customers to choose an entry and exit server in different jurisdictions which would make the task of legally gaining access to servers at the same time significantly more difficult. We run our own network of log free DNS servers that are only accessible to our customers.

12. A full list is available here.

IVPN website

LiquidVPN

1. No, we do not store any logs that could be used to match an IP address and timestamp back to a LiquidVPN user.

2. LiquidVPN Inc., Cheyenne, Wyoming.

3. We use Google Analytics on our front end web site. Everything else is self-hosted.

4. If the data center requires us to answer DMCA complaints, then we let them know that these files are not hosted locally and that because we do not keep logs on user activity it is impossible for us to investigate the DMCA complaint further.

5. No, we have not received any court orders. We would have to explain to law enforcement that the only way we could provide information about a user on our network is if they were able to provide us with enough information to identify the user in our system. Basically, they would need to provide billing information or the user’s registered email address.

If they were able to provide this information, we would be required to hand over the user’s email address, registered first name and transactional information. There is no other way to identify a user on our system. We would publish any correspondence from law enforcement in the transparency section of our website, and if we were not allowed to do that we would stop updating our warrant canary.

6. All file sharing traffic is allowed and given equal priority on any server within our network.

7. For anonymity, we recommend Bitcoin, which requires only a first name and email address. We accept PayPal, which requires a first name and email address. Finally, when a user pays via credit card, their address, first name and email address are required.

8. I would recommend users connect to any of our OpenVPN servers: they use 256-bit AES/Camellia, 4096-bit RSA keys, the TLS-DHE-RSA-AES-256-CBC-SHA cipher suite, and a SHA2 HMAC digest (SHA512). If users want added privacy, we recommend IP Modulation, which randomly modifies the source public IP address per packet on all of a user’s traffic.

9. IPv6 support is on the roadmap for this year. Until it is fully supported, IPv6 leaks are blocked via our client. We do provide DNS leak protection and a full-on VPN firewall that goes well beyond the protection of a standard VPN kill switch.

10. Our custom applications work for Windows, Mac and Android.

11. All of our VPN servers are bare metal servers that we control. Our servers are not accessible by anyone except us. We do provide private DNS servers and SmartDNS for free. Users can access USA and UK content from any server on our network.

12. We have servers in 17 data centers and 11 countries in North America, Europe and Asia.

LiquidVPN website

SmartVPN

1. We don’t have enough space on our server PoPs to keep logs (true story).

2. The company name is Anonymous SARL and operates under the jurisdiction of the Kingdom of Morocco.

3. We use Google Analytics and Tawk live support.

4. What about ignoring them? There is nothing to take down.

5. This has never happened before, but we won’t be able to cater to their demand as we can’t identify that user within our system.

6. BitTorrent and other P2P protocols are allowed on all our servers.

7. We use BitPay (Bitcoin) and PayPal.

8. We recommend OpenVPN for desktop and IKEv2 for mobile devices. For encryption we use the AES-256-CBC algorithm. DNS leak protection is already enabled; “kill switches” will be available soon.

9. We don’t provide IPv6 support as of now.

10. We provide a custom VPN application for Mac and Windows based on OpenVPN, and mobile apps (Android and iOS) based on IKEv2.

11. We have a mix: physical control over most of our infrastructure, while some exotic locations are hosted by 3rd party partners.

12. A full list is available here.

SmartVPN website

PrivateVPN

1. We do not keep ANY logs that allow us or a third party to match an IP address and a time stamp to a user of our service. We highly value the privacy of our customers.

2. Privat Kommunikation Sverige AB and we operate under Swedish jurisdiction.

3. We use a service from Provide Support (ToS) for live support. They do not hold any information about the chat session. From Provide Support: chat conversation transcripts are not stored on Provide Support chat servers; they remain on the chat server for the duration of the chat session, are then optionally sent by email according to the user account settings, and are then destroyed.

We also use Google Analytics and StatCounter to collect statistics on how many visitors we have, which pages are popular, and the conversion of our ads. This data is used to optimize the website and advertising.

4. We’ll say that we don’t store any logs of our customers’ activity. Privacy and anonymity of our customers are something we really value and due to our non-logging policy, DMCA notices will be ignored.

5. Due to our policy of NOT keeping any logs, there is nothing to provide about users of our service. To clarify, we do not log or have any data on our customer’s activities. We have never received any court order.

6. Yes, we allow torrent traffic on all servers. All traffic is treated equally and we do not, under any circumstances, throttle our traffic. We buy high-capacity internet traffic so we can meet the demands. In some locations, we use Tier 1 IP transit providers for the best speed and routing to other peers.

7. PayPal, Stripe and Bitcoin. Every payment has an order number, which is always linked to a user. Otherwise, we would not know who has made a payment. To be clear, no one can link a payment to an IP address you get from our service or online user activity.

8. OpenVPN TUN with AES-256; on top of that is a 2048-bit DH key.

9. For our Windows VPN client, we have a feature called “Connection guard”, which will close selected programs if the connection drops. We have no tools yet for DNS leaks, but the best way, which always works, is to change the local DNS on the device to the DNS servers we provide. Right now, our developers are working on a new feature that will protect against DNS leaks, and on a new version of the kill switch. Protection against IPv6 leaks will also be implemented in the new VPN application.

10. Yes, we offer our own customized VPN application for Windows, iOS (iPhone/iPad), Android and MacOS (OS X), with features that help to protect our customers.

11. We have physical control over our servers and network in Sweden. We’re only using trusted data centers with strong security. Our providers have no access to PrivateVPN’s servers and most importantly, there are no customer data/activities stored on the VPN servers or on any other system we have.

12. See here and here.

PrivateVPN website

CryptoStorm

1. Nope, no logs. We use OpenVPN with logs set to /dev/null, and we’ve even gone the extra mile by preventing client IPs from appearing in the temporary “status” logs, using our patch available at https://cryptostorm.is/noip.diff.
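On a stock OpenVPN server, the logging posture described here corresponds roughly to the directives below. This is a sketch; the noip.diff patch mentioned above goes further by scrubbing client IPs from the status output itself:

verb 0             # keep OpenVPN's own logging to the bare minimum
log /dev/null      # discard the main log entirely
status /dev/null   # discard the periodic status dump that would otherwise list clients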

2. We’re a decentralized project, with intentional separation of loosely-integrated project components. We own no intellectual property, patents, trademarks, or other such things that would require a corporate entity in which ownership could be enforced by the implied threat of State-backed violence; all our code is published and licensed opensource.

3. No, we don’t use any external visitor tracking or email providers.

4. Our choice is to reply to any such messages that are not obviously generated by automated (and quite likely illegal) spambots. In our replies, we ask for sufficient forensic data to ascertain whether the allegation has enough merit to warrant any further consideration. We have yet to receive such forensic data in response to such queries, despite many hundreds of such replies over the years.

5. See above. We have never received any valid court orders requesting the identity of a user, but if we ever did receive such a request, it would be impossible for us to comply as we keep no such information.

6. Yes, all traffic is allowed.

7. We accept PayPal and payments using Stripe (includes Bitcoin), although we will manually process any other altcoin if a customer wishes. We don’t have financial information connected in any way to the real-life identity of our network members; our token-based authentication system removes this systemic connection, and thus obviates any temptation to “squeeze” us for private data about network membership.

We quite simply know nothing about anyone using our network, save for the fact that they have a non-expired (SHA512 hash of a) token when they connect. Also, we now process Stripe orders instantly in-browser.

8. We only support one cipher suite on-net. Offering “musical chairs” style cipher suite roulette is bad opsec, bad cryptography, and bad administrative practice. There is no need to support deprecated, weak, or known-broken suites in these network security models; unlike browser-based https/tls, there are no legacy client-side software suites that must be supported. As such, any excuse for deploying weak cipher suites is untenable.

Everyone on Cryptostorm receives equal and full security attention, including those using our free/capped service “Cryptofree.”

There are no “kill switch” tools available today that actually work. We have tested them, and until we have developed tools that pass intensive forensic scrutiny at the NIC level, we will not claim to have such. Several in-house projects are in the works, but none are ready yet for public testing.

We take standard steps to encourage client-side computing environments to route DNS queries through our sessions when connected. However, we cannot control things such as router-based DNS queries, Teredo-based queries that slip out via IPv6, or unscrupulous application-layer queries to DNS resolvers that, while sent in-tunnel, nevertheless may be using arbitrary resolver addressing. Our Windows client attempts to prevent some of this, but it’s currently impossible to do so completely.

We are saddened to see others who claim they have such “magical” tools; getting a “pass” from a handful of “DNS leak” websites is not the same as protecting all DNS query traffic. Those who fail to understand that are in need of remedial work on network architecture.

As we run our own mesh-based system of DNS resolvers, “deepDNS”, we have full and arbitrary control over all levels of DNS resolution presentation to third parties.

9. We only handle IPv4 connections; we are currently looking into IPv6, but that’s a work in progress. Our widget protects against IPv6 leaks, and we advise our customers on how to prevent leaks on other platforms.

10. We offer an open source application written in Perl (dubbed the “CS widget”), source code available at GitHub. Currently only for Windows, but we are working on porting it to Linux. The application is essentially an OpenVPN GUI with some tweaks here and there to prevent different types of leaks (DNS, IPv6, etc.) and to make connecting as easy as possible. Output from the back end OpenVPN process is shown in the GUI. When you exit the program, that data is forgotten.

11. We deploy nodes in commodity data centers that are themselves stripped of all customer data and thus disposable in the face of any potential attacks that may compromise integrity. We have in the past taken down such nodes based on an alert from onboard systems and offsite, independently maintained remote logs that confirmed a violation was taking place.

It is important to note that such events do not explicitly require us to have physical control of the machine in question: we push nameserver updates, via our HAF (Hostname Assignment Framework) out via redundant, parallel channels to all connected members and by doing so we can take down any node on the network within less than 10 minutes of initial commit.

We have constructed a mesh-topology system of redundant, self-administered secure DNS resolvers which has been collected under the label of “deepDNS”. deepDNS is a full in-house mechanism that prevents any DNS related metadata from being tied to any particular customer. It also allows us to provide other useful features such as transparent .onion, .i2p, .p2p, etc. access. There is also DNSCrypt support on all deepDNS servers to help protect pre-connect DNS queries.

12. Our server list is available here.

CryptoStorm website

BolehVPN

1. We do not keep any logs on our VPN servers that would allow us to do this.

2. BV Internet Services Limited, Seychelles

3. We use Zendesk to deal with support queries and do track referrals from affiliates. However, we provide the option to send us PGP-encrypted messages via e-mail and also via Zendesk. We do not use Cloudflare. We also have an opt-in-only education/blog list that uses HubSpot. For announcements we use our own e-mail system.

4. We generally choose providers that are friendly towards such DMCA notices; where that cannot be avoided, we keep those servers as Surfing/Streaming servers with P2P disabled. These servers are more for geo-location or general-purpose surfing rather than P2P. At no time do we give out customer information to handle this.

5. There has been a German police request for certain information in relation to a blackmail incident. Despite it appearing legitimate, we could not assist as we did not have any user logs. We maintain a warrant canary at https://www.bolehvpn.net/canary.txt which we do update once a month or when there is a request for information (even if we have not complied with it).

6. Most servers support P2P except those marked as Surfing-Streaming which are with providers with strict DMCA requirements. All other servers support P2P and are not treated differently from any other traffic.

7. Paypal, Paymentwall, Coinpayments, Paydollar, MolPay and we also accept direct Bitcoin/Dash payments.

8. We recommend OpenVPN and our Cloak servers, which use AES-256 encryption and an XOR patch that obfuscates your traffic so that it is not recognizable as VPN traffic.

9. We provide IPv6 leakage protection.

10. We have a custom application for Windows and Mac and also a slightly modified version for Android.

11. They are bare metal boxes hosted in various providers. We do use our own DNS servers.

12. Canada, France, Germany, Italy, Japan, Luxembourg, Malaysia, Netherlands, Singapore, Sweden, Switzerland, United Kingdom and USA.

BolehVPN website

AzireVPN

1. No, we don’t.

2. The registered name is Netbouncer AB, operating under Swedish jurisdiction.

3. No, we refuse to use 3rd party software. E-mail, ticket system and other services are hosted in-house on open-source software.

4. We politely inform the sender party that we cannot help them since it is not possible for us to identify the user.

5. This has not happened yet, but in the case a valid court order is issued, we will inform the other party that it is not possible to identify an active user of our service.

6. Yes, all protocols are allowed.

7. We accept payments through Bitcoin (Bitpay), Paypal, Credit Cards and Swish.

8. We recommend that our users use the default configuration we supply with OpenVPN 2.4 (see the sketch after this list):
– AES-256-GCM data-channel
– TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384 TLS
– HMAC-SHA512 authentication
– 4096 bit key size using a Diffie-Hellman key exchange
– 2048 bit TLS additional auth key
– 2048 bit TLS additional crypt key
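In OpenVPN 2.4 client-config terms, that parameter list corresponds roughly to the sketch below; the key file name is a placeholder, and the 4096-bit Diffie-Hellman parameters are generated server-side:

cipher AES-256-GCM                                 # data channel
tls-cipher TLS-ECDHE-RSA-WITH-AES-256-GCM-SHA384   # control channel
auth SHA512                                        # HMAC authentication
tls-crypt ta.key   # additional 2048-bit TLS crypt key (or "tls-auth ta.key 1" for the auth variant)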

9. We assign IPv6 addresses in all our locations, overriding the local IPv6 assigned to the client. Currently we provide guides to prevent DNS leaks, and also for kill switches on some operating systems. Our new client will soon get an integrated kill switch, and DNS leak prevention is already in place for some platforms.

10. Yes, we do offer a custom VPN application for all desktop platforms (Windows, MacOS and Linux), with source code released on GitHub.

11. Yes, we own our hardware, co-located in dedicated racks in different data centers around the globe. We do host our own DNS servers. One thing that is very important to us is the hardware installation at new locations – we always bring the hardware there ourselves, to make sure that it is installed per our own guidelines and that no kind of foul play by another party can take place. The next step is to start video-documenting the process for each new location, for full transparency.

12. As of now: Sweden, the US, the United Kingdom and Spain, with Canada, Moldova and more US locations on the roadmap for 2017.

AzireVPN website

VPNBaron

1. No, we don’t! No traffic logs are recorded. We monitor only the number of simultaneous user connections on our network as a whole, and do not link the user to a particular server. This helps us prevent unlimited simultaneous connections from a single user.

2. Our registered legal name is Hexville SRL. We’re under Romanian jurisdiction, inside the European Union. The EU takes privacy issues more seriously than the US, as many already know.

3. For our sales site analytics, we rely on Google Analytics. Other than that, all our systems and support tools belong to us and are hosted in-house.

4. None of our users has ever received a DMCA notice while connected to our service, as we are unable to identify the source user due to our no-traffic-logging policy. On our end, we have an internal procedure for dealing with DMCA claims that goes unnoticed by our users, and users’ privacy is not affected.

5. No subpoena has been received by our company. If that happens, we’ll assist as much as we’re legally obliged to. Keep in mind that we don’t store any information about our users except their login credentials.

6. Yes, it is allowed. We don’t restrict traffic in any way. Net neutrality is king.

7. We accept Bitcoin (and many other virtual currencies), PayPal and credit cards. The lack of traffic logs means payments cannot be linked to the activity of individual accounts.

8. We take security very seriously at VPNBaron. We use only the OpenVPN protocol, one of the most secure and hardest-to-crack protocols, with the AES-256-CBC cipher, TLSv1/SSLv3 DHE-RSA-AES256-SHA, and 2048-bit RSA.

On top of OpenVPN, you can also choose one of two anti-DPI (Deep Packet Inspection) protocols, “Tor’s obfsproxy ScrambleSuit” and “SSL”, which mask your VPN connection from your ISP. These protocols come in handy in places that actively block VPN connections, like China, Egypt or university campuses.

9. There is no difference in user experience regardless of the user’s IP type. Using the OpenVPN protocol, we do not have IPv6 leaks, as these issues have been addressed in the latest OpenVPN versions. The same goes for DNS leaks: OpenVPN has added a setting that deals with them.

On top of that, we also provide another DNS leak protection system that we developed before the protocol was updated, and a kill switch feature that disables the network card if there is any risk of the user’s privacy being breached, temporarily disconnecting the device from the internet. These settings can be activated or deactivated as the user wishes.

10. We offer a light and easy-to-use app for Windows. For the other platforms we offer automation scripts and visual guides that get the user up and running in no time, regardless of the user’s tech savviness.

11. Our VPN servers hold minimal data and do not store any private information. We do not have physical control of the servers, but we have unlimited access. This allows us to offer locations all over the world.

12. We offer more than 30 servers in 18 countries and we’re expanding fast. You can find the full list here.

VPNBaron website

AceVPN

1. We do not log, period. We respect our users’ privacy. IPs are shared amongst users and our configuration makes it extremely difficult to single out any user.

2. We are registered in the USA, operate as Acevpn.com, and the name of the company is Securenet.

3. We use Google Analytics on www.acevpn.com (marketing site). For emails, we use Google cloud and these are regularly purged.

4. We block the port mentioned in the complaint. IPs are shared by other users and our configuration makes it extremely difficult to single out any user. We do not share or sell any information to 3rd parties.

5. To date, we have not received a court order or subpoena. Our users cannot be identified based on IP address.

6. We have special servers for P2P and are in data centers that allow such traffic. These servers also have additional security to protect privacy when P2P programs are running.

7. We accept Bitcoin, PayPal, and Credit cards for payments. We store billing information on a secure server separate from VPN servers.

8. For higher security needs we suggest using our IPsec IKEv2 VPN or our OpenVPN with elliptic curve encryption, which we are rolling out as we speak. Both of these protocols use next-gen cryptographic algorithms and AES-256 data encryption, suitable for top-secret communication. Read about our IKEv2 implementation.

9. We provide kill switches in case a connection drops. Our servers are tested for DNS leaks. Our service is currently IPv4 only, so there are no IPv6 leaks.

10. We use an unmodified OpenVPN client that is signed by the developers. Our users are encouraged to use a VPN client of their choice. We do not offer custom applications at this time.

11. We have full control over our servers. Servers are housed in reputable data centers. Many of them are ISO certified and are designed to the highest specifications for performance, reliability, and security. We operate our own DNS servers (Smart DNS) for streaming videos. For VPN, we use Google and Level3 DNS.

12. We have servers in 26+ countries and 50+ locations.

AceVPN website

OctaneVPN

1. No. Our gateway servers operate out of an encrypted RAM Disk volume that loads remotely on boot. When they are powered down, the RAM Disk is lost.

2. We operate as two separate companies. Octane Networks is a US registered company and handles customer-facing communications. The infrastructure company is a Nevis-based company and manages all the network infrastructure.

3. We use Google Analytics for general website trends. We use Hotjar occasionally for A/B and user experience testing. Support is internal.

4. If we receive a DMCA notice or its equivalent based on activity that occurred in the past, we respond that we do not host any content and have no logs. If we receive a real-time DMCA notice where the customer VPN session is still active when the notice is processed, we notify the customer if we have sufficient information to do so. No customer data is used to respond to DMCA notices.

5. This has not happened. Our customers’ privacy is a top priority for us. We would handle a court order with complete transparency.

A court order would likely be based on an issue traced to a gateway server IP address and would, therefore, be received by our network operations company which is Nevis based. The validity of court orders from other countries would be difficult to enforce. The network company has no customer data and no log data, so if it were compelled to respond to a court order, our response would likely lack the type of information requested.

Our marketing company is US-based and would respond to an order issued by a court of competent jurisdiction. The marketing company does not have access to any data related to network operations or user activity, so there is not much information that a court order could reveal.

6. P2P is allowed. We operate with net neutrality with the exception of restricting outgoing SMTP to prevent spammers from abusing the service.

7. Bitcoin, Credit/Debit Card, and PayPal. If complete payment anonymity is desired, we suggest using Bitcoin or a gift/disposable credit card. Methods such as PayPal or Credit/Debit card are connected to an account token so that future renewal payments can be properly processed and credited. We allow customers to edit their account information. With our US/Nevis operating structure, customer payment systems information is separate from network operations.

8. We recommend using the AES-256-CBC cipher with OpenVPN, which is what our client uses. IPsec is available for native Apple device support and PPTP is offered for other legacy devices, but OpenVPN offers the best security and speed and is our recommended protocol.

9. Our client disables IPv6 completely as part of the DNS and IP leak protection in our Windows and Mac OS X OctaneVPN clients. Our OpenVPN-based client’s IP leak protection works by removing all routes except the VPN route from the device when the client has an active VPN connection. This is a better option than a ‘kill switch’ because our client ensures the VPN is active before it allows any data to leave the device, whereas a ‘kill switch’ typically monitors the connection periodically and reacts if it detects a drop in the VPN connection.

With a ‘kill switch’, data sent during the time between checks is potentially vulnerable to a dropped connection. Our system is proactive, versus a reactive kill switch.
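Stock OpenVPN can approximate this proactive model, though only roughly: redirect-gateway claims the default route for the tunnel before traffic flows, and persist-tun keeps the tun device and its routes in place across restarts so packets have nowhere else to go. A sketch of that rough analogue, not OctaneVPN’s actual implementation:

redirect-gateway def1   # route all traffic into the tunnel via two overriding /1 routes
persist-tun             # keep the tun interface and routes across restarts and drops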

Customers should be vigilant, as other software such as JavaScript, Flash, Java and WebRTC can leak IP addresses independently of the VPN connection. Customers might want to consider creating a profile in their web browser specifically tailored toward web browsing privacy by disabling 3rd party plugins/extensions.

10. Yes, for Windows and Mac OS X. We support a number of protocols and software configurations.

11. In our more active gateway locations, we colocate. In locations with lower utilization, we normally host. All of our network infrastructure is set up so that each gateway boots, creates an encrypted RAM Disk, and authenticates with a central server before loading its configuration over our encrypted network remotely. The individual gateways only have a boot script – no data or config information is on the individual gateways. If we lost access to a gateway due to a third party action, the encrypted RAM Disk would vanish upon powering down.

12. We have gateways in 45 countries and 92 cities.

OctaneVPN website

proXPN

1. We do not log any information about IP usage. In fact, in most locations we NAT everything, so it is not even possible to match a request to a user account and a source IP.

2. proXPN B.V., out of the Netherlands.

3. We utilize a tracking cookie for affiliate sales that expires in 30 days. We use SendGrid for email which sends out the welcome and support emails, but email information is never shared with another 3rd party.

4. We respond and have internal processes that deal with these requests that do not include or disclose any customer information.

5. We keep no record of how users are mapped to IPs so we have nothing to give.

6. We don’t block, filter or throttle any BitTorrent or file-sharing traffic.

7. Visa, MasterCard, American Express, PayPal and Bitcoin.

8. We currently support IPsec, OpenVPN and PPTP. IPsec tends to be the fastest and most reliable; however, as it is UDP, some locations may restrict access. OpenVPN over TCP is also reliable, but slower than OpenVPN over UDP.

9. We provide DNS leak protection. “VPN Guard” is a kill switch on our desktop application.

10. Yes, we have clients for Windows XP and up, MacOS 10.6 and up, and Linux (built for Debian, but also working on other flavors such as Ubuntu); on mobile we support iOS and Android.

11. We run and maintain our own core servers and we also serve our own DNS.

12. USA, Canada, Costa Rica, UK, Iceland, Netherlands, Germany, Switzerland, Sweden, China, Romania, Singapore, Australia, France and Japan.

proXPN website

Hide.me

1. No, we don’t keep any logs. We have developed our system with an eye on our customers’ privacy, so we created a distributed VPN cluster with independent public nodes that do not store any customer data or logs at all. We have also been audited by one of the finest independent security experts, Leon Juranic, who has certified us to be completely log free.

2. Hide.me VPN is operated by eVenture Limited and based in Malaysia with no legal obligation to store any user logs at all.

3. Our landing pages, which are solely used for advertising purposes, include a limited amount of third-party tracking scripts, namely Google Analytics. However, no personal information that could be linked with VPN usage is shared with these providers. We do not send information that could compromise someone’s security over email.

4. Since we don’t store any logs and/or host copyright infringing material on our services, we’ll reply to these notices accordingly.

5. It has never happened, but in such a scenario we wouldn’t be able to entertain the court order, because our infrastructure is built in a way that does not store any logs and there is no way we could link any particular cyber activity to any particular user. If we were ever forced to store user logs, we would prefer to close down rather than put at risk the users who have placed their trust in us.

6. There is no effective way of blocking file-sharing traffic without monitoring our customers, which is against our principles and might even be illegal. We usually recommend that our customers avoid the US & UK locations for file-sharing, but this is on a self-regulatory basis, since these countries have strong anti-copyright laws in place.

7. We support over 200 international payment methods, including Bitcoin, PayPal, credit cards, bank transfer and UKash. All payments are handled by external payment providers and are linked to a temporary payment ID. This temporary payment ID cannot be connected to the user’s VPN account/activity. After the payment is completed, the temporary payment ID is permanently removed from the database.

8. All of the modern VPN protocols that we support (IKEv2, OpenVPN and SSTP) are considered secure even after the NSA leaks. We follow cryptographic standards and have configured our VPN servers accordingly in order to support a secure key exchange with 8192-bit keys and strong symmetric encryption (AES-256) for the data transfer.

9. Our users’ privacy is of utmost concern to us. Our Windows client has features such as a kill switch, Auto Connect, and Auto Reconnect, which make sure that the user is always encrypted and anonymous. Even if one of our customers decides not to use the client, our community offers a big variety of tutorials to help customers protect themselves against any sort of leaks.

Above all, we have put in place some additional layers of security, including default protection against IP and DNS leaks. To ensure IP leak protection, as soon as the VPN connection is established our application deletes the default gateway of the user’s Internet connection, so the local network becomes inaccessible. This enforces VPN usage, adding another layer of security and making IP leaks impossible. And that’s not all.

Our Windows app also blocks outgoing IPv6 connections automatically to prevent IP leaks. This won’t affect a user’s overall Internet connectivity, even if the ISP assigns an IPv6 address.

10. We have our own VPN application for Windows, Mac, Android and iOS.

11. We operate our own non-logging DNS servers to protect our customers from DNS hijacking and similar attacks. We operate 32 server locations in 27 different countries. However, we do not own the physical hardware; instead, intrusion detection and various other security measures are in place to ensure the integrity and security of every one of our servers.

We choose all third party hosting providers very carefully, so we can assure that there are certain security standards in place (ISO 27001) and no unauthorized person could access our servers. Among our reputable partners are Leaseweb, NFOrce, Equinix and Softlayer.

12. Our servers are located in countries all over the world; among the most popular ones are Canada, the Netherlands, Singapore, Germany, Brazil, Mexico and Australia. You can view all available locations here.

Hide.me website

AirVPN

1. No, we don't.

2. It is “AIR” and it is registered in Italy.

3. No, absolutely not.

4. They are ignored.

5. No court order or magistrate decree has ever been issued to disclose the identity of any of our customers, but we will of course do our best to comply with a valid and jurisdictionally competent magistrate decree or order. However, it must also be said that we cannot provide information that we don't have. Up to now, no personal information has ever been given away, and data about traffic is not even kept by us (we do not inspect, log or monitor traffic in any case).

6. Yes, it's allowed on each and every server. We do not discriminate against any protocol. Our infrastructure is totally agnostic and we do not even monitor traffic to see which protocols are being used.

7. We accept Bitcoin, a wide range of cryptocurrencies, PayPal and major credit cards. About PayPal and credit cards, the usual information pertaining to the transaction and account/credit card holder are retained by the financial institutions, and it is possible to correlate a payment to a user (which is good for refund purposes when required).

When this is unacceptable for security reasons, Bitcoin or some other cryptocurrency should be used. Bitcoin is not anonymous by itself, but it can be given a rather good anonymity layer simply by running the Bitcoin client behind Tor. On top of that, we also accept cryptocurrencies that intrinsically offer a strong anonymity layer protecting transactions by default.

8. We would recommend our setup, which includes Perfect Forward Secrecy, 4096-bit RSA keys, 4096-bit Diffie-Hellman keys and authentication on both sides not based on username/password. In general, we would also recommend being cautious and well informed before jumping to ECC. Our service setup, based on OpenVPN, is the following (a client-side sketch follows the list):

DATA CHANNEL CIPHERS
AES-256-CBC with HMAC-SHA1 for authentication

CONTROL CHANNEL CIPHERS
AES-256-GCM with HMAC-SHA384 for authentication
AES-256-CBC with HMAC-SHA1 for authentication

4096-bit Diffie-Hellman key size
TLS Ciphers (IANA names): TLS-DHE-RSA-WITH-AES-256-CBC-SHA, TLS-DHE-RSA-WITH-AES-256-GCM-SHA384
TLS additional authorization layer key: 2048 bit
Perfect Forward Secrecy through Diffie-Hellman key exchange DHE. After the initial key negotiation, re-keying is performed every 60 minutes (this value can be lowered unilaterally by the client)
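Translated into client-side OpenVPN directives, that setup looks roughly like the following sketch (the remote endpoint is a placeholder, and AirVPN's generated profiles may differ in detail):

client
dev tun
proto udp
remote vpn.example.org 443   # placeholder endpoint
cipher AES-256-CBC           # data channel cipher
auth SHA1                    # data channel HMAC, per the list above
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA   # control channel preference
tls-auth ta.key 1            # the additional 2048-bit authorization layer key
reneg-sec 3600               # re-key every 60 minutes, as described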

9 and 10) Our free and open source software "Eddie" (released under GPLv3) for GNU/Linux, Windows and OS X/macOS implements features which prevent the typical DNS leaks in Windows and any other leak (for example, in case of unexpected VPN disconnection). Leak prevention, called "Network Lock", is not a trivial kill switch: it prevents various leaks that a classical kill switch can't block, such as leaks caused by WebRTC, by programs binding to all interfaces on a misconfigured system, and by malevolent software that tries to determine the "real" IP address.

We currently block outbound IPv6 packets on the client side (a solution we prefer over disabling IPv6, which remains an optional feature). In 2016 we planned IPv6 support for mid- or late 2017, and at the moment we are comfortable with that deadline.

We provide guides, firewall-based and otherwise, to prevent leaks on various systems for anyone who can't or doesn't wish to use our client software Eddie.

11. Our servers are housed in data centers which we have physical access to, provided that the access is arranged well in advance for security reasons. Data center technicians are also guaranteed access to the servers for any on-site support needs.

12. We have servers located in several countries. We offer a public, real-time server monitor on our website which provides extensive information (including location, of course) for each server.

AirVPN website

HideIPVPN

1. We store no logs related to any IP address. There is no way for any third party to match a user's IP to any specific activity on the internet.

2. The registered name of the company is Server Management LLC and we operate under US jurisdiction.

3. We use live chat provided by WHMCS and Google Apps for incoming email. For outgoing email we use our own SMTP server.

4. Since no information is stored on any of our servers, there is nothing we can take down. We reply to the data center or copyright holder that we do not log our users' traffic and that we use shared IP addresses, which makes it impossible to track who downloaded or uploaded any data from the internet using our VPN.

5. HideIPVPN may disclose information, including but not limited to information concerning a client, a transmission made using our network, or a website, in order to comply with a court order, subpoena, summons, discovery request, warrant, statute, regulation, or governmental request. But because we have a no-logs policy and use shared IPs, there won't be anything to disclose. This has never happened so far.

6. This type of traffic is welcomed on our German (DE VPN) and Dutch (NL VPN) servers. It is not allowed on US, UK, Canada, Poland and French servers as stated in our TOS – the reason for this is our agreements with data centers. We also have a specific VPN plan for torrents.

7. HideIPVPN accepts the following methods: PayPal, Bitcoin, and credit & debit cards.

8. The SoftEther VPN protocol. Users can currently use our VPN applications on Windows and OS X systems. Both versions have a kill switch feature in case the connection drops. Our apps can re-establish the VPN connection and, once active, restart closed applications. The app also has an option to enable DNS leak protection.

9. Our VPN servers have been checked against those issues, and users have been warned about the danger of DNS leaks. Our free VPN app has both kill switch and DNS leak protection options for our clients to use. When customers are using our software, we disable IPv6 through the VPN connection. We are currently working to re-route IPv6 traffic via the VPN over OpenVPN and SoftEther.

10. Yes, we recommend that our customers use our free VPN application for an easier, faster and better connection. It works with Windows, macOS, iOS and Android.

11. We don't have physical control of our VPN servers. Servers are outsourced to premium data centers with high-quality tier-1 networks. For our VPN we use Google DNS servers, and for Smart DNS we use our own DNS servers.

12. At the moment we have 29 servers located in 7 countries – US, UK, Netherlands, Germany, Canada, Poland and France.

HideIPVPN website

VPN Land

1. We can't match a VPN IP address and timestamp to a specific user or his or her IP address. We do not watch what our users do online, nor do we record their Internet activities. For billing purposes, the IP addresses from which orders are placed are stored permanently.

2. VPNLand Inc. Registered in Toronto, Canada.

3. Yes, we use several third-party vendors, such as Zopim for online chat messages and Picreel for promotional purposes.

4. In most cases DMCA emails are ignored. We had to block uncommon ports on our USA servers, so no DMCA emails come in against our US VPN servers.

5. It hasn't happened. If it does, we will analyze the order and take appropriate action.

6. P2P traffic is blocked on our USA servers; other jurisdictions are fine for P2P. All traffic is treated equally.

7. PayPal, Credit cards, webmoney, Paymentwall, Bluesnap. We use WHMCS for billing/support (see also question 1).

8. OpenVPN with 256-bit encryption.

9. IPv6 traffic is filtered. Unfortunately, there is no kill switch feature at the moment.

10. Yes, for Windows. A mobile app is currently under development.

11. We own some of the servers in Toronto; in other countries the servers are rented.

12. See here.

VPN Land website

VPN.ht

1. We do not keep such data (logs).

2. VPN.ht Limited is incorporated in Hong Kong.

3. Google Analytics. We are switching this to Piwik.

4. We do not handle DMCA notices, our data center partners do, and in all cases we do not keep logs so we cannot identify the customer.

5. We will stop updating our Warrant Canary.

6. All protocols are allowed in all our locations.

7. We accept various payment methods: credit card, PayPal, Bitcoin and other national payment options. All are linked only to an email address.

8. For general use, 128-bit AES; we also offer 256-bit AES as the maximum encryption level.

9. We are currently deploying our IPv6 network across our servers. We provide all our VPN users with a private, logless DNS server. Our application also offers various features, such as a kill switch.

10. Our application is open-source and can be found on github.com/vpnht. Currently, we are offering custom applications for Windows / Mac / Android / iOS.

11. We don’t, but we do have a strong relationship with our partners who operate data centers.

12. We have 127 servers in around 33 countries and we try our best to expand to locations most requested by our customers.

VPN.ht website

OVPN

1. Our entire infrastructure and VPN service is built to ensure that no logs can be stored – anywhere. Our servers are locked in cabinets and operate without any hard drives. We use a tailored version of Debian, which doesn't support SATA controllers, USB ports etc. To further increase security, we use TRESOR and grsecurity to be resistant to cold boot attacks.

2. OVPN Integritet AB (Org no. 556999-4469). We operate under Swedish jurisdiction.

3. For website insights, we use Piwik, an Open Source solution that we host ourselves. The last two bytes of visitors’ IP addresses are anonymized; hence no individual users can be identified. For support, we use an internally built system.

The mail server is hosted by Glesys, a trusted provider in Sweden. Automatic emails from the website are sent using Mailgun, but we never send any sensitive information via email. Zopim is used for live chat, which we will eventually migrate from when we’ve built a satisfactory in-house solution.

4. Since we don’t store any content, such requests aren’t applicable to us.

5. No court orders have ever been received. However, the police have contacted us numerous times asking who had a specific IP address at a particular time. For the reasons mentioned in #1, we can't provide them with any answers.

We published an open letter to the Swedish police to disclose that we are unable to provide any user information they request. We also have insurance that covers trial expenses, enabling us to take any request to court should an agency doubt our truthfulness.

6. Yes.

7. Bitcoin, Braintree for credit cards, PayPal, and cash payments via postal mail. There is a connection between payments and accounts, which is required in order to know who bought what. We recommend that all users pay anonymously.

8. We only provide OpenVPN, utilizing AES-256-CBC with a 2048-bit Diffie–Hellman key and a 1024-bit TLS key to ensure the key exchange is done safely during the authentication phase.

9. We tunnel both IPv4 and IPv6, so no leaks should happen. Our custom client has DNS leak protection as well as a kill switch to ensure our users' safety.

10. Yes, we offer a custom VPN client for Windows, OS X and Ubuntu. We’ve also developed and manufactured a router with extensive functionality and security precautions, named OVPNbox.

11. Yes – we own all the servers used to operate OVPN. Our servers are locked in separate cabinets in each data center. However, using physical force one could break open the cabinets and therefore get physical access to our servers. To mitigate these extreme scenarios we have focused immensely on the physical security of our servers. Someone can literally be standing right next to our servers and will still fail to extract any data.

More information on OVPN’s physical security is available here.

12. Canada, Germany, the Netherlands and Sweden.

OVPN website

Perfect Privacy

1. We do not log or store any traffic, IP addresses, or any other kind of data that would allow identification of our users or their activities. The anonymity and privacy of our users are our highest priority, and the Perfect Privacy infrastructure was built with this in mind.

2. Perfect Privacy is registered in Zug, Switzerland.

3. All email and support tools are developed and hosted in-house under our control. We do use Google Analytics for website optimization and better market reach, but with the anonymizeIp parameter set. However, Perfect Privacy users are exempted from any tracking by Google Analytics and are also able to use our TrackStop filter which will block any tracking (as well as ads and known malware domains) directly on our servers.

4. Because we do not host any data, DMCA notices do not directly affect us. However, we do receive copyright violation notices for file-sharing, in which case we truthfully reply that we have no data that would allow us to identify the party responsible.

5. The only step on our side is to inform the contacting party that we do not have any data that would allow the identification of a user. There have been incidents in the past where Perfect Privacy servers were seized, but user information was never compromised that way. Since no logs are stored in the first place, and all our services additionally run within RAM disks, a server seizure will never compromise our customers.

6. Yes, Bittorrent and other file-sharing is generally allowed and treated equally to other traffic. However, at certain locations that are known to treat copyright violations rather harshly (very quick termination of servers), we block the most popular torrent trackers to reduce the impact of this problem. Currently this is the case for servers located in the United States and France.

7. We offer a variety of payment options ranging from anonymous methods such as sending cash, or Bitcoin. However, we also offer payment with PayPal and credit cards for users who prefer these options. We keep no data about the payment except for when the payment was received which is linked only to an anonymous account number.

8. While we offer a range of connection possibilities, we would recommend using OpenVPN with 256-bit AES encryption. Additional security can be established by using a cascaded connection: the Perfect Privacy VPN Manager allows you to cascade your OpenVPN connection over up to four freely selectable servers.

9. Perfect Privacy provides full IPv6 support (meaning you will get an IPv6 address even if your ISP does not offer IPv6) and as such it is fully integrated in the firewall protection. The “Kill Switch” is activated by default and will prevent any IP and DNS leaks for both IPv4 and IPv6.

10. Yes, we offer custom clients for Windows, Linux, MacOS X and Android at the moment. At the time of this article, the Linux, Mac and Android clients are still in open beta. More functionality will be added to these clients in the near future.

11. Our VPN servers run in various data centers around the world. While we have no physical access to the servers, they all are running within RAM disks only and are fully encrypted.

12. We offer servers in 23 countries. For full details about all server locations, please check our server status site, as we are constantly adding new servers.

Perfect Privacy website

VPN Unlimited

1. No, we do not keep logs that would allow us to match the IP or DNS addresses of online services our customers visit to their accounts.

2. KeepSolid Inc. We operate under the USA jurisdiction.

3. We use Zendesk for technical support purposes, and Fabric to collect crash statistics and fix bugs as soon as possible. We send emails to our customers via SparkPost. Emails are not linked to any personal information of our users, and we take special care with the security of our users' email addresses too.

On our site we use Google Analytics to collect anonymous statistics on page views, clicks, etc. Our users’ personal information is not stored or disclosed to third parties.

4. As we do not log any of the customers’ information or session data, VPN Unlimited users are protected by legal definition. There is a US consumer protection law that can be used to protect our customers.

5. We have no information we could disclose as we don’t log addresses of sites our customers visit.

6. We allow legal peer-to-peer file-sharing on servers in France, Luxembourg, Romania, Canada, and San Francisco. VPN Unlimited is not intended primarily for torrenting, as its main task is to protect users' online privacy and anonymity.

7. We accept over 100 payment methods: from credit cards to PayPal to Bitcoin to payments through mobile operators. And, of course, users can use their Apple or Amazon ID account, from the purchasing tab inside the app, to extend their subscription to our service. All of the payment systems we use ensure a very high level of security.

8. We recommend using KeepSolid Wise, which enables AES-256 encryption with additional obfuscation for users in countries with heavily censored web access (such as China, the United Arab Emirates and Turkey). It is also the most secure protocol for use in hostile internet environments (e.g. public Wi-Fi networks).

9. We partially support IPv6. The service doesn’t have a “kill switch” feature yet, and DNS leak protection works on Windows. We are also working on an update to add a DNS firewall that will protect our users from tracking, malware and ads.

10. We offer store versions along with standalone apps for iOS, Android, Mac, Windows (desktop), Windows Phone, and Linux users. Last year we launched plugins for Google Chrome and Mozilla Firefox. We collaborate with FlashRouters so users can purchase a router, protected by our VPN. Also, users can get personal servers with dedicated IP addresses for their use.

11. We do not own data centers. We rent physical and virtual servers from well-trusted companies like LeaseWeb, OVH, RedStation, ServerCentral, IBM SoftLayer, etc. We have full control over DNS servers that are being used for work via VPN.

12. Servers are located in 51 countries. We regularly launch new servers in new regions. The whole list is available on our site. Users can contact us through this page and suggest a location for us to launch the next server in.

VPN Unlimited website

Ivacy

1. Ivacy believes in anonymity and therefore we do not maintain user activity logs.

We only keep track of login attempts because we allow 5 simultaneous connections with one VPN account. We come across encrypted credentials in this process, which is fully automated, and we keep this info only while the user is connected; it is automatically deleted as soon as the user disconnects from our server. Since we never come across any personal IP address in this process, we can't map any connection to any IP address.

2. Ivacy is registered under PMG Private Limited. Our headquarters is in Singapore, one of the few nations without mandatory data-retention laws. Working out of the region allows us to further ensure the anonymity of our users – something we hold very dear. At present, there seems to be no legal hindrance or government intervention that could undermine our reasons for working out of Singapore.

3. We use Aweber for sending emails to our customers, and our live chat services are managed by Livechatinc’s platform.

4. We cannot relate any specific activity to any specific user, since we don't keep any logs or records. Moreover, working from Singapore, one of the few nations without mandatory data-retention laws, allows us to further ensure the anonymity of our users. We have not come across such an event, but if we do receive a legal notice, we cannot do anything more than ignore it, simply because it has no legal binding on us. Since we are based in Singapore, all legal notices have to be dealt with according to Singapore law first.

5. Again, such a scenario has not presented itself yet. We do not log any traffic or session data so we cannot identify and connect a specific activity with a particular user of our service.

6. We are proud to mention that we allow P2P traffic on many of our marked servers, including servers in the UK, USA and Canada.

7. We accept payment through major credit cards, Bitcoin, PayPal, Webmoney and Perfect Money. Apart from the aforementioned payment methods, we also accept more than 120 region-based payment options through PaymentWall. When a customer places an order, we immediately send a payment confirmation email to confirm that the order was placed successfully. Our payment merchant then verifies the information given by the customer and lets us know whether to deliver the order. This process typically takes 5–60 minutes.

8. We offer and recommend 256-bit encryption in addition to SSL-based protocols (i.e. SSTP and OpenVPN). We offer our own DNS servers, an "Internet Kill Switch" and Split Tunneling features.

9. We have an IPv6 leak protection feature in our apps; customers can enable it via settings, and we highly recommend they do so. We provide DNS leak protection by operating our own DNS servers. We also have an Internet kill switch in our Windows and Android apps, and will soon launch the same for Mac and iOS devices.
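On the server side, steering clients to a provider-operated resolver of this kind is a one-line OpenVPN push; the following is a sketch, and the address is a placeholder rather than Ivacy's actual resolver:

# push an in-tunnel DNS resolver to every connecting client
push "dhcp-option DNS 10.8.0.1"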

10. Yes, we offer custom VPN applications to our users. These include VPN Apps for Windows, Android, Mac and iOS. We also have a dedicated VPN addon for Kodi running on OpenELEC based devices and Raspberry Pi.

11. We physically control some of our server locations where we have a heavier load. Other locations are hosted with third parties until we have enough traffic in that location to justify racking our own server setup. We host with multiple providers in each location and have server locations in more than forty countries. In all cases, our network nodes load over our encrypted network, so anyone taking control of a server would have no usable data on the disk. Yes, we have our own DNS servers.

12. We have servers located in more than 40 countries. You can find the complete list of servers here.

Ivacy website

WhatTheServer

1. Our OpenVPN servers are configured with "verb 0" so that they keep no logs at all. Our SOCKS proxy servers do keep authentication logs, which include the IP address, but these logs are cleared every 6 hours. We have a session management system that tracks which users are logged into which servers; however, that system operates on real-time data and does not log events.

2. What The * Services, LLC is incorporated in the USA.

3. We use Google Analytics on our website for visitors.

4. We respond saying that we are a VPS/VPN provider and that we do not have the logs requested, nor any other logs about our customers' usage of our service.

5. We have not yet received such a court order or subpoena for user information. However, if we do in the future, we will take several steps. First, we would consult with our lawyers to confirm the validity of the order/subpoena, and respond accordingly if it is NOT a valid order/subpoena. Then we would alert our user of the event if we are legally able to.

If the order/subpoena is valid, we would see if we have the ability to provide the information requested, and respond accordingly if we do NOT have it. If we DO have the information requested, we would immediately reconfigure our systems to stop keeping that information. Then we would consult with our lawyer to determine whether there is any way we can fight the order/subpoena and/or what the minimum level of compliance we must meet is.

6. BitTorrent and other file-sharing traffic is allowed on all VPN/Proxy servers which are NOT located in the USA.

7. Our payment options include PayPal, Bit-Pay (bitcoin), PerfectMoney, and Coinbase (bitcoin). When a user selects a payment method our system will remember that payment method and link it to their account. For this reason, we suggest that our users do not put in their real name & contact information, and that they should pay us anonymously via Bitcoin.

8. All of our OpenVPN and SOCKS Proxy servers are running OpenBSD and are using LibreSSL instead of OpenSSL. This protects our servers from a wide range of attacks on the encryption.

Our OpenVPN servers use AES-256-CBC with an SHA512 HMAC for the data channel, and DHE-RSA-AES256-GCM-SHA384 on the control channel. They are also configured with 4096-bit RSA keys and custom 4096-bit Diffie-Hellman parameters. Our SOCKS proxy is based on OpenSSH, so it supports whatever ciphers the client wants to use; with OpenSSH, the client decides which cipher to use, not the server.

We push routes to our OpenVPN clients which instruct them to route all IP traffic not destined for the local network through the VPN. This includes DNS traffic. We push OpenVPN client configuration files that include "resolv-retry infinite" and "persist-tun", which, combined, should prevent the client from sending traffic in the clear unless the user manually kills the OpenVPN connection.

Furthermore, all of our OpenVPN and SOCKS proxy servers are full IPv4/IPv6 dual-stack, and we push a default route for both IPv4 and IPv6 to our clients. This is critical: if your home ISP gives you an IPv6 address, your computer will use IPv6 instead of IPv4, and you would leak a significant amount of traffic if we did not push a default route for IPv6.
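Assembled into a client profile, the directives described above look roughly like this (a sketch; the endpoint is a placeholder):

client
dev tun
remote vpn.example.net 1194   # placeholder endpoint
resolv-retry infinite         # keep retrying name resolution rather than giving up
persist-tun                   # hold the tun device open across restarts to avoid clear-text gaps
redirect-gateway def1 ipv6    # route all IPv4 and IPv6 traffic through the tunnel (OpenVPN 2.4+)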

9. We do not offer DNS leak protection via kill switches.

10. We do not offer a custom VPN application. Instead, we instruct our users to install an OpenVPN client of their choice from a trusted source, i.e. openvpn.net.

11. All of our infrastructure is hosted in third-party colocation facilities. However, we use full-disk encryption on all of our servers.

12. We have servers in the USA, Germany, Netherlands, and Sweden.

WhatTheServer website

HeadVPN

1. We DO NOT keep any logs. We do not store logs relating to traffic, session, DNS or metadata.

2. We're registered in the United Kingdom under the name "HEADVPN LTD".

3. We are using Live chat provided by Tawk.to and Google Apps for incoming email. We use Google Analytics and a WHMCS ticket tool.

4. Since we don’t keep any information on any of our servers there is nothing that we can take down. If we receive a valid DMCA notice we can only take action if the connection is still active (we notify the user and stop the session).

5. We haven’t received any court orders. If that happens, the agency will be informed that no user information is available as we DO NOT keep logs.

6. For P2P/BitTorrent traffic we have special VPN servers (located in a data center that allows such traffic). On other VPN servers, P2P/BitTorrent traffic is blocked.

7. We accept all forms of Credit/Debit cards payments through the Stripe payment gateway and PayPal payment method. We do not store any billing information such as credit cards or addresses.

8. We provide all kinds of VPN protocols, including PPTP, L2TP/IPsec, SSTP, OpenVPN and SoftEther. We recommend the OpenVPN protocol, as it is the most secure, using RSA 4096-bit and AES 256-bit encryption keys.

9. DNS leak protection is best handled by using OpenVPN protocol (AES-256-CBC algorithm for encryption).

10. For the time being we do not provide a custom tool (one is in progress).

11. All our VPN servers are hosted in 3rd party data centers with the highest specifications for performance, reliability and security. We have direct access to each server and they all are running within RAM disks (which are fully encrypted).

12. Our VPN servers are located in the United Kingdom, United States, Germany and Netherlands.

HeadVPN website

PureVPN

1. PureVPN believes in anonymity, and therefore we do not maintain user activity logs. To better our services and enhance the usability of our software, we only monitor access attempts to our servers, and this is done solely for security and troubleshooting purposes.

2. The registered company name is GZ Systems Ltd. We are headquartered in Hong Kong; one of the few nations without mandatory data retention laws.

3. PureVPN does not store any personally identifiable information. Third-party tools, such as Google Analytics, are used only for marketing purposes and to improve the customer experience.

4. We take DMCA notices quite seriously and encourage our users to comply with necessary guidelines to avoid such notifications. Actions taken cannot be broadly stated. They are dealt with on a case by case basis.

5. Such a scenario has not occurred yet. If it does occur, we will act in the best interest of the user and the law.

6. File-sharing is allowed on some servers. We uphold regional copyright laws and closely monitor changing policies on the matter, so the list of eligible servers is subject to change. You can always ask our customer support for details on which servers allow file-sharing.

7. We accept payment through major credit cards, BitCoin, PayPal, AliPay, Webmoney, Yandex, Ukash, CashU, Giropay, Necard, Mercado Pago, MyCard Wallet and more.

When you place an order, we immediately send a payment confirmation email to let you know that you have placed it successfully. Our merchant then verifies the information you provided and lets us know whether to deliver the order. This process typically takes 5–60 minutes. We do not come across anyone's IP address in the process.

8. We offer and recommend 256-bit encryption in addition to SSL-based protocols (i.e. SSTP and OpenVPN). We offer our own DNS servers, an "Internet Kill Switch" and Split Tunneling features. As far as incoming traffic is concerned, we offer a NAT firewall, Web Protection and a Stealth VPN feature that lets you browse websites via virtual browsers, eliminating cookie usage.

9. We provide IPv6 leak protection including DNS leak protection and internet kill switch as standard features to our clients.

10. We offer custom VPN applications for multiple platforms, including but not limited to Windows, Mac, iOS and Android. PureVPN is also compatible with routers, gaming consoles, BoxeeBox, Roku, Apple TV, Android TV and 20+ other OSes and devices.

11. PureVPN operates one of the largest networks, with servers in 100+ countries. An infrastructure of such magnitude would be rather difficult to maintain alone, so we have agreements with data centers throughout the world. These data centers are bound by contract and thus cannot interfere with our data without our instruction.

12. We have more than 750 servers across five continents, spread all over the world.

PureVPN website

Proxy.sh

1. We do not keep any logs whatsoever. We even have an anonymous, token-based authentication system.

2. We are a not-for-profit unit of the offshore digital incubator Three Monkeys International Inc., which operates from the Republic of Seychelles.

3. We use Google Translate & Google Maps across some of our web app's elements for UX gains; these can be turned off with a JavaScript blocker. We also use Mandrill for reliable email delivery, but users may still subscribe to our services with a non-working or disposable e-mail address.

Everything else, from support to billing, is organised in-house. We do not use any CRM, and we do not have any advertising or marketing channel. We only rely on word-of-mouth.

4. We immediately block the affected port on the related node, and then we publish the notice to both our Transparency Report and our Twitter account. In the event we are restricted from releasing it, we make use of our warrant canary.

5. We respond that we are unable to identify any of our users, but that our premises are open for inspection by any forensic expert. We also inform our members about the situation through Twitter and our transparency report. If we are unable to speak, we make use of our warrant canary and warn our users that we have updated it. Finally, we make sure to drop the affected VPN node as soon as possible. This has happened once.

6. Absolutely. We do not discriminate against any traffic type.

7. We accept more than 100 various payment methods & crypto-currencies. Our gateways are G2A, SafeCharge, Paymentwall, Okpay, Blockchain and eDigiCash. There is no recurring subscription, and all billing information is processed by the gateways: the only information we retain is a transaction ID and the e-mail address of the user account.

8. For maximum stealth, we recommend our RSA 4096-bit + TOR’s obfsproxy (obfs4) integration. And for encryption strength, we recommend ECC + XOR (secp384r1). Both are available directly within our custom-made, open-sourced OpenVPN client.
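For illustration, chaining OpenVPN through a local obfs4 listener and selecting the named curve can be expressed with standard client directives; this is a sketch that assumes an obfs4 client is already listening locally, and the addresses and ports are placeholders:

proto tcp                    # the obfuscated transport carries TCP
socks-proxy 127.0.0.1 1050   # hand the tunnel to a local obfsproxy/obfs4 client
remote 203.0.113.10 443      # placeholder VPN endpoint reached via the proxy
ecdh-curve secp384r1         # elliptic-curve key exchange on the named curve (OpenVPN 2.4+)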

9. Safejumper, our open-sourced OpenVPN client, gives you protection against both DNS and IPv6 leaks. It also comes with a robust kill switch that literally kills the network interface if the connection drops. Our various web apps also test other potential leaks, such as GPS or WebRTC, and teach you how to fix them. We also have extensive literature to help you fix any leaks (DNS, IPv6, torrent, etc.) manually.

10. Safejumper, our custom-built OpenVPN client, is made fully open source on Github, and it is available on Windows, Mac, Linux, Android and iOS.

11. Of course, we run our own OpenNIC-compliant DNS servers. Also, we use our own physical servers in friendly data centers for our core services and our biggest VPN nodes. Our VPN network is also supplemented with a variety of bare-metal dedicated servers or virtual private servers across the world.

12. We provide VPN nodes in 57 countries, across more than 300 locations.

Proxy.sh website

VPN Secure

1. We do not keep any logs.

2. VPNSecure Trust. Australia.

3. Google Analytics / Zendesk chat. Email servers and support system is hosted in-house.

4. We do not keep information on our users and are unable to identify the user belonging to the notice.

5. We provide whatever information we can correlate from the court order, which is zero. Because we do not log, information pointing to an IP address of our servers does not denote a specific user; users are given shared IPs, so traffic is mixed between them.

6. We allow P2P. Previously, P2P was not allowed on *some* servers, but we have migrated away from those locations.

7. Bitcoin / Perfect Money / PayPal / Credit Card / PaymentWall. If we need to look at a payment, we ask the customer to determine which payment is theirs.

8. We have multiple cipher options: AES-256-CBC with unique 2048-bit encrypted keys per user account, along with our Stealth VPN option.

9. We block IPv6 in multiple places: our DNS servers do not respond with IPv6 records, and IPv6 is blocked at the OS level. We also provide UDP blocking, which protects P2P users. The DNS leak fix is on by default.

10. Yes, we have our own OpenVPN application for Linux / Windows / MacOS X / Android / iPhone.

11. The main infrastructure is colocated and owned by VPNSecure; remote endpoints are leased servers configured with encrypted folders, meaning any third party that tried to access a server would be unable to access any VPN-specific information. VPNSecure manages all infrastructure and VPN endpoints internally; we do not outsource this.

12. 47+ countries.

VPN Secure website

SecureVPN.to

1. We don't log any individually identifying information; the privacy of our customers is our top priority. Our service was awarded the first, and so far only, "Privacy badge" in an independent review by That One Privacy Guy.

2. Our service is operated by a group of autonomous privacy activists outside of "Fourteen Eyes" or "Enemy of the Internet" countries. Each server is handled according to the jurisdiction of its location.

3. Our website has been developed by ourselves and we don’t use any external service providers.

4. We reply to takedown notices, but can’t be forced to hand out information because of our non-logging policy.

5. This hasn’t happened yet, but if we were forced to identify any of our customers at a specific server location, we would drop this location immediately. Under no circumstances are we going to log, monitor or share any information about our customers.

6. Yes, it is allowed and treated equally on all servers.

7. We offer a wide range of anonymous payment methods like Bitcoin, Dash, Ethereum, Paysafecard and Perfect Money. No external payment processor receives any information because all payments are processed by our own payment interface.

8. We would recommend OpenVPN, available in UDP and TCP mode. We are using AES-256-GCM (OpenVPN 2.4.*) / AES-256-CBC (OpenVPN 2.3.*) for traffic encryption, 4096 bit RSA keys for the key exchange and SHA-512 as HMAC. These settings offer you the highest grade of security available.
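The version-dependent cipher choice they describe maps onto OpenVPN 2.4's cipher negotiation; here is a server-side sketch (2.3 clients that cannot negotiate simply fall back to the configured CBC cipher):

ncp-ciphers AES-256-GCM:AES-256-CBC   # prefer GCM where both ends run OpenVPN 2.4+
cipher AES-256-CBC                    # fallback data channel cipher for older clients
auth SHA512                           # HMAC digest, as stated above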

9. We fully support IPv6 internet connections. Our homemade VPN Client provides advanced security features like a Kill Switch, DNS Leak Protection, IP Leak Protection, IPv6 Leak Protection, WebRTC Leak Protection and many more.

10. Our VPN Client is available for Windows and doesn’t store any logs. We plan to offer a version for Linux, Mac and mobile devices.

11. We rent 35 servers in 25 countries and are continuously expanding our server park. It is impossible to have physical control over servers spread this widely, but we have taken security measures to prevent unintended server access. At the moment we use the excellent anycast nameservers of UltraDNS.

12. You can find our server list under the following link.

SecureVPN.to website

ibVPN

1. We do not spy on our users and we don't monitor their Internet usage. We do not keep logs of our users' activity.

2. The company's registered name is Amplusnet SRL. We are located in Romania, which means we are under EU jurisdiction.

3. For the presentation part of our website (the front end) we use Google Analytics, Google Translate and a CDN. Occasionally we run A/B tests and promotional campaigns that might involve third-party tools such as Optimizely, Marketizator or Picreel.

For the secure part of our web site (the back end) we do not use external e-mail providers (we host our own mail server) and we host a dedicated WHMCS installation for billing and support tickets.

To provide quick support and a user-friendly service experience, our users can contact us via live chat (Zopim), but activity logs are deleted on a daily basis. There is no way to associate any information provided via live chat with a user's account.

4. So far we have not received any DMCA notices for any P2P server from our server list. That is normal considering that the servers are located in DMCA-free zones. Before we allow our clients to use a P2P server we test it for several months in order to make sure that the speeds are fine and we do not receive any complaints from the server provider. For the rest of the servers, P2P and file sharing activities are not allowed/supported.

5. So far, we have not received any valid court orders. As stated in our TOS, we do not support criminal activities, and in case of a valid court order we must comply with the EU law under which we operate.

6. We allow BitTorrent and other file-sharing traffic on specific servers located in Netherlands, Luxembourg, Canada, Sweden, Russia, Hong Kong, Lithuania, Bulgaria and Ukraine. Based on our legal research, we consider that it is NOT safe for our users to allow such activities on servers located, for example, in the United States or United Kingdom.

7. We accept various payment methods like Credit cards, PayPal, prepaid credit cards, Payza, SMS, iDeal, OOOPay and many more. Payments are performed exclusively by third party processors, thus no credit card info, PayPal ids or other identification info are stored in our database. For those who would like to keep a low profile, we accept BitCoin, LiteCoin, WebMoney, Perfect Money etc.

8. The most secure VPN connection is OpenVPN, which provides 256-bit Blowfish encryption. We also support SSTP and SoftEther on most of the servers.

9. A kill switch has been implemented in our VPN clients. When enabled, the kill switch closes all applications (that are running and have been added to the kill switch app list) in case of an unwanted VPN disconnection. Our latest applications allow customers to disable IPv6 traffic, ensure that only our DNS servers are used while connected to the VPN, and optionally filter DNS requests using the firewall to avoid leaks.

10. We currently provide custom VPN apps for Windows, Android, iOS and Mac OS X. We also offer browser extensions (cross-platform) for Chrome and Firefox that are able to route the HTTP and HTTPS requests.

11. We do not have physical control over our VPN servers, but we have full control of them, and all servers are managed personally by our technical staff. Admin access to the servers is not provided to any third party.

12. Our servers are located in dozens of countries. A full list is available here.

ibVPN website

Trust.Zone

1. Trust.Zone doesn't store any logs. All we need from users is just an email to sign up. No names, no personal info, no tracking, no logs.

2. Trust.Zone is under Seychelles jurisdiction and we operate according to the law in Seychelles. There is no mandatory data retention law in Seychelles. In our jurisdiction a court order would not be enforceable, and since we don't store any logs, there is nothing to be taken from our servers. The company is operated by Extra Solutions Ltd.

3. Trust.Zone does not use any third-party support tools, tracking systems like Google Analytics or live chats that hold user information.

4. If we receive any type of DMCA requests or Copyright Infringement Notices – we ignore them. Why? Trust.Zone is under Seychelles offshore jurisdiction. There is no mandatory data retention law in Seychelles. The laws of Seychelles are very friendly to Internet users. Under Seychelles jurisdiction a court order would not be enforceable and since we don’t store any logs, there is nothing to be had from our servers.

5. A court order would not be enforceable, because we do not log information and therefore there is nothing to be had from our servers. Trust.Zone is a VPN provider with a warrant canary. Trust.Zone has not received or been subject to any searches, seizures of data, or requirements to log the actions of our customers.

6. We don't restrict any kind of traffic. Trust.Zone does not throttle or block any protocols, IP addresses, servers or any type of traffic whatsoever. Trust.Zone is recommended as the "best VPN for torrenting" by some of the biggest BitTorrent websites in the world – ExtraTorrent (#2 according to TorrentFreak), 1337x.to (#6) and Torrentz2 (#5).

7. All major credit cards are accepted, and Bitcoin, PayPal, Webmoney, Alipay, wire transfer and many other types of payment are available. To stay completely anonymous, we highly recommend using anonymous payments via Bitcoin. Trust.Zone offers 10% off for everyone who pays with Bitcoin.

No logs, no names, offshore jurisdiction, and anonymous payments – we’re trying to do all the best for our users to get their freedom on the Internet back.

8. Trust.Zone uses the highest level of data encryption. We use a protocol which is faster than OpenVPN and also includes Perfect Forward Secrecy (PFS). The most unique feature of Trust.Zone VPN is that you can forward your VPN traffic via ports 21 (FTP), 22 (SSH/SCP/SFTP), 80 (HTTP), 443 (HTTPS) or 1194 (OpenVPN), most of which can't be blocked by your ISP. Trust.Zone uses AES-256 encryption by default. We also offer L2TP over IPsec, which likewise uses 256-bit AES encryption.

9. Trust.Zone offers a kill switch. Trust.Zone has no support for IPv6 connections, to avoid any leaks. We also provide users with additional recommendations to make sure there are no DNS or IP leaks.

10. Trust.Zone provides users with one-click, easy-to-use application for Windows. Trust.Zone supports all major OS and devices – Windows, iOS, Android, Linux, Windows Mobile, Mac, DD-WRT routers and other OpenVPN compatible devices.

11. We have a mixed infrastructure. Trust.Zone owns some physical servers, to which we have physical access. In locations with lower utilization we normally host with third parties, but in that case we use only dedicated servers, fully controlled by our network administrators. DNS queries go through our own DNS servers, though we may also use Google DNS depending on the platform.

12. We are operating with 100+ servers in 30+ countries and still growing. The full map of the server locations is available here.

Trust.Zone website

Doublehop

1. Zero, zip, zilch, nada. For realsies, /dev/null 2>&1. We have nothing to share with authorities, even if we felt compelled to.

2. We’re incorporated as Doublehop GmbH in the Seychelles. We operate as Doublehop.me, Doublehop, and Doublehop VPN.

3. We do not use any external visitor tracking services such as AdSense. We use Mandrill to deliver email automatically when orders are placed. In the interest of full disclosure, please be advised that Mandrill provides analytical statistics relating to email (e.g., open rates and clicks). We disable these features unless we are doing web development and need to quickly confirm that changes do not impact email delivery.

We also permit registration via Telegram Messenger as a more secure alternative to email. A Telegram message is automatically sent to confirm an order and payment. We use Amazon S3 to provide access to client certificates. Files are protected in transit by TLS and at rest by server-side encryption.

4. Not applicable. To quell overofficious legal demands, all legal complaints and requests (DMCA, Trademark, Defamation, Court Order, Law Enforcement, Private Information, Data Protection, Government, etc) are forwarded to Lumen.

5. We’ll respond with one-liners from Fifty Shades 😀 We have nothing to share with authorities, even if we felt compelled to. If we run into trouble, we’ll stop updating our Warrant Canary.

6. Yes, P2P is permitted on all Doublehop VPN servers and treated equally to other traffic, although we encourage our users to avoid using USA-based exit nodes for such traffic. For example, it’s better to connect to USA as a Doublehop VPN entry node, and exit Netherlands than it is to connect to Netherlands as an entry node, exiting USA.

7. Doublehop’s only accepted payment method is Bitcoin. Since we do not require our clients to reveal their identity to use our services, paying with Bitcoin offers privacy when used properly. A new Bitcoin address is generated for each order, and monitored for 72 hours before being scrubbed from the order details.

8. Our users VPN to Country_A, and we route them over an encrypted interconnection to another data center; the traffic then exits Country_B. We use a modern cipher (AES-256-CBC) between clients and nodes, with RSA-4096 for key exchange/certs, and force client use of TLS >=1.2 with the tls-version-min OpenVPN directive. We have h/w crypto acceleration on all our boxes.

Our VPN clients see:

Cipher 'AES-256-CBC' initialized with 256 bit key
TLSv1.2, cipher TLSv1/SSLv3 DHE-RSA-AES256-GCM-SHA384, 4096 bit RSA
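On the standard OpenVPN client, the TLS floor mentioned above is set with a single directive; a minimal sketch mirroring that log output:

cipher AES-256-CBC    # data channel cipher, as in the log line above
tls-version-min 1.2   # refuse TLS handshakes below version 1.2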

9. We don't provide custom tools, for good reason (see Q10 for more info). DNS leak protection is best handled by the hosts file or by pushing OpenVPN options to clients. We use OpenVPN options to offer protection against DNS leaks (which Windows 10 is prone to).

10. No, the standard OpenVPN client is more transparent and open to peer review. Some VPN providers offer custom software that can introduce security issues or store connection logs. We provide configs for Linux, Android, iOS, Mac OS X (Viscosity), and Windows Vista+.

11. We use dedicated servers that employ RAM disks, software-based full-disk encryption, or hardware-based full-disk encryption, depending on their role and specifications. This ensures that any intervention from a provider won't assist in any investigation. Traffic between nodes is multiplexed, defeating passive correlation. Furthermore, Doublehop VPN doubles security and privacy with double hops across multiple legal jurisdictions to disrupt potential investigations.

Our clients are permitted to use whichever DNS they’re most comfortable with! By default, we use Google DNS to ensure that users receive localized content from the exit node chosen. This is especially important when it comes to streaming (e.g., Netflix, Pandora) from a USA exit node. We’re looking to add a SmartDNS and DNSCrypt server in the near future to provide additional options for our clients.

12. Netherlands, Spain, Finland, Canada, and USA, all configured as Doublehop VPN pairs (Map).

Doublehop website

ShadeYou VPN

1. ShadeYou VPN does not keep any logs. The highest level of privacy is the main mission of ShadeYou VPN. To use our service, only a username and e-mail address are required; no personal or real data.

2. We are incorporated as DATA ACCENTS LP and operate under the jurisdiction of the United Kingdom.

3. We use Google Analytics as a tool that allows us to improve our website and give our users a better experience, as well as SiteHeart for online support. Neither of these tools tracks or holds personal information.

4. The abuse team of ShadeYou VPN answers as follows: a) we do not store any illegal content on our servers; b) every user agrees to our privacy policy when registering, which warns that illegal actions are prohibited and that we are not responsible for them; c) we have no personal data about our users, nor any logs of their activities, that could be shared with third parties, because we simply do not store it.

5. Sharing any personal data of our users is absolutely impossible, since we do not store it and do not keep any logs. Such situations have happened, but there is not a single case in which we have shared any information about our users with third parties.

6. BitTorrent and other file-sharing traffic is allowed on most of our servers, with only a few exceptions (such as servers where traffic is limited).

7. ShadeYou VPN uses payment systems including PayPal, Perfect Money, Webmoney, Qiwi, Yandex Money, Easy Pay, Ligpay, UnionPay, AliPay, MINT, CashU and Ukash, and also accepts payments via Visa, MasterCard, Maestro and Discover. Of course, Bitcoin is available.

8. We strongly recommend using OpenVPN, since it is the safest and uses the strongest encryption (the TLS protocol with a 4096-bit key and the AES-256-CBC crypto-algorithm).

9. We do not support IPv6 at the moment, but we are working on it. Our desktop client supports a kill switch and DNS leak protection.

10. Yes, we offer our own application, which is available on Windows. It is very simple and easy to use. Mobile clients are in development at the moment.

11. All our servers are colocated around the world in the data centers of various leading hosting companies. Yes, we use our own DNS servers.

12. Here is an overview.

ShadeYou VPN website

oVPN.to

1. Short answer: No! We don't create or keep any logs.

2. We’re not a company and we operate under no jurisdiction. Servers are running under their local jurisdiction and have to follow local laws.

3. No.

4. We check the port forwards in question, close them, and send the notice to your account.

5. We have never received any valid court order or subpoena. In any case, we are unable to identify our clients, and we would shut down affected servers if needed.

6. BitTorrent and other file-sharing traffic is allowed, though we'd recommend downloading from Usenet.

7. We support Bitcoin, Litecoin and other cryptocurrencies, WebMoney.ru, PerfectMoney.is and some prepaid vouchers. No references are left after a cryptocurrency transaction, and you can always ask us to update your payment ID.

8. We recommend the AES-256-CBC/GCM ciphers and HMAC-SHA512 with 4096-bit certificates – the standard setting on all servers.

9. Yes: on Linux with iptables, and on Windows with our client software.

10. We offer an open-source Client for Windows and headless API script for Linux.

11. We use rented dedicated servers from different providers, and we provide our own DNS servers.

12. Check our server page.

oVPN.to website

CactusVPN

1. We don't keep any logs.

2. CactusVPN Inc., Canada

3. No.

4. We have not received any official notices yet. We will only respond to local court orders.

5. If we received a valid order from the Canadian authorities, we would have to help them identify the user, but as we do not keep any logs we simply cannot do that. We have not received any orders yet.

6. Yes, it is allowed on Dutch and Romanian servers.

7. PayPal, credit card, Bitcoin and a list of other less popular payment options.

8. We recommend that users use SoftEther with the ECDHE-RSA-AES128-GCM-SHA256 cipher suite.

9. Yes, we have these features. For now we do not support IPv6, and we recommend that clients disable IPv6 when they use the VPN service. With the current version of the CactusVPN software for Windows, we have implemented a feature that disables IPv6 automatically when the VPN connects and re-enables it when the VPN disconnects.

10. We have VPN apps for Windows, MacOS, iOS and Android.

11. We use servers from various data centers.

12. US, UK, Canada, Netherlands, Germany, France, Romania.

CactusVPN website

VPN Providers With Some Logs (max 7 days)

VPN.ac

1. We keep connection logs for 1 day to help us troubleshoot customers' connection problems and also to identify attacks (e.g. brute force, account theft). This information contains the IP address, connection start and end times, protocol used (including port) and amount of data transferred.

2. Netsec Interactive Solutions SRL, registered in Romania.

3. No.

4. We handle DMCA complaints internally without involving the users (i.e. we do not forward anything). We use shared IP addresses, so it's not possible to identify the users.

5. It has never happened. In such an event, we would rely on legal advice.

6. It is allowed.

7. Bitcoin, PayPal, Credit/Debit cards, Perfect Money, pre-paid voucher cards and more.

8. OpenVPN using Elliptic Curve Cryptography for key exchange (ECDHE, curve secp256k1) is used by default in most cases. We also support RSA-4096, with SHA256 and SHA512 for digest/HMAC. For data encryption we use AES-256-GCM and AES-128-GCM.

9. Our client software can block IPv6 traffic. DNS leak protection is forced by default and it’s not optional. A ‘kill switch’ is available with our client software.

10. We offer clients for Windows, MacOS, Android, iOS, Linux (still in beta), as well as browser addons for Chrome, Firefox, Opera.

11. We have physical control of our servers in Romania. In other countries we rent or colocate our hardware. We have measures in place to prevent unauthorized physical access and to alert us if it occurs. We use our own DNS resolvers, and we encrypt all DNS queries from VPN gateways to DNS resolvers.

12. Locations are listed in real-time here.

VPN.ac website

OneVPN

1. The answer to this question is 50% ‘Yes’ and 50% ‘No’.

We do not keep any log of users' original IPs, which could lead anyone to their physical location. We do record login and logout times against the client-area username to track bandwidth usage. We keep this bandwidth data for only 7 days, while the money-back guarantee is valid; on the eighth day we discard the data and keep zero logs for our paid users.

2. OneVPN is a product of Unravel Technologies, a Hong Kong based registered company.

3. No, we do not use any external visitor trackers or support tools. For communicating with customers regarding their initial credentials, inquiries, support tickets and complaints, we have a customized in-house emailing portal. For sending marketing emails and newsletters, we use Sandy (an Amazon-based email portal).

4. Based in Hong Kong, we are not bound by any law to keep logs of our users. The only information we have about a client is their login and logout times, which can only point to the OneVPN server the user connected to. We cannot provide any further information because we do not have any. If any DMCA or other notice lands with us, all we can provide is login and logout times.

5. First of all, in all ten months of our operations this has never happened to us. If any such scenario arises in the future, we cannot identify the user, as we do not keep any logs of our users' identities; we can only point to the server the user connected to.

6. Yes, BitTorrent and other file-sharing traffic is allowed on all of OneVPN's servers except those in the USA, Canada, and Australia. We have physical servers there, and in those countries P2P file-sharing is not allowed. Users can connect to the Netherlands, Germany, France, or any other server to download torrents.

7. We offer PayPal, WebMoney, Bitcoin, and credit card options via third-party merchants. The user visits OneVPN’s website and selects a payment method. Once the user makes the payment, it goes directly to the payment merchant. The payment merchant verifies the payment and gives OneVPN the go-ahead.

We then send the user their credentials via email. Once the payment clears, it lands in OneVPN’s account. Throughout the entire process, the user provides the required information to the payment merchant and not to OneVPN. This way every user is anonymous to us.

8. OneVPN is among the few VPN providers to offer OpenConnect, compatible with Cisco AnyConnect. This protocol helps users achieve a high level of security with 256-bit AES encryption and fast speeds at the same time. We highly recommend that all our users use the OpenConnect protocol.

9. We do not support IPv6, which eliminates all vulnerabilities associated with IPv6 leaks. The best and most recommended way to avoid IPv6 leaks is to disable the functionality from your desktop interface. Yes, we do provide DNS leak protection, and a NAT firewall comes with all our VPN servers. The user does not need any manual configuration or prior setup for DNS leak protection. We also provide an Internet kill switch feature in our Windows app. You can also configure the kill switch option on your Mac while using OneVPN.
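As an illustration only (not part of OneVPN’s software), disabling IPv6 on a Linux desktop is commonly done through sysctl; the kernel keys below are standard, but check your distribution’s documentation before making the change persistent:

# Disable IPv6 on all interfaces until the next reboot
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1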

10. We offer customized VPN apps for Windows, Mac, and Android.

11. We have selected the best data centers to host our servers. We operate 100% physical servers which all run on BSD. All our VPN servers have their own DNS.

12. We have 60+ VPN servers deployed in 20 countries. You can check the complete list of all the locations here.

OneVPN website

IronSocket

1. We keep limited session logs for all of our services. These logs record the duration of a connection, the IP address used for the connection, and the number of bytes transferred.

These logs are typically kept for 72 hours, usually less, after which they are purged. We log this data for fraud and abuse detection/prevention. Since we use shared IPs on our servers, and do not log activity, it is difficult to associate specific activity with individual users.

2. IronSocket is owned and operated by Pusa and Daga Hong Kong Limited in the jurisdiction of the Hong Kong Special Administrative Region.

3. We do not use any third-party email providers or support tools. We use Google Analytics and HasOffers which have minimal visitor tracking information used for website usage reporting and management of our affiliate program, respectively.

4. IronSocket is not subject to the DMCA or any international equivalent. We do NOT host any user-uploaded content on any of our servers. While IronSocket is not subject to the DMCA, some of our hosting and data center partners reside in locations that are. If they escalate a DMCA notice to us, we reply to the provider that we are a service provider like them, and that we do not log our users’ activity.

5. This has not happened. It is our policy to cooperate with legal orders that are valid under Hong Kong SAR law. The process to address such request is:

1. Verify the order is legal and valid
2. Consult with legal counsel to determine what we are required to provide
3. Determine if we have the data being requested

Because of our privacy policy, terms of service, shared IP usage, and anonymous payment methods, it would be difficult to impossible to associate a specific activity with an individual user.

6. P2P traffic is allowed on servers in countries where such traffic is not restricted. We do not allow P2P on all servers due to the legal pressure on the data centers in certain regions of the world. All traffic is treated equally on our network.

7. We accept credit / debit card payments via SafeCharge and PayPal. Bitcoin transactions are processed by BitPay and major US brand gift cards are handled by PayGarden. We do not collect sensitive payment information. Any sensitive payment information is maintained by each respective payment processor and is linked by a unique transaction number.

8. OpenVPN with strong encryption: AES 256-bit encryption with SHA256 message authentication, using a 4096-bit key for secure authentication.

9. We are currently beta testing a new client for Microsoft Windows systems that offers DNS leak protection and VPN drop protection. VPN drop protection has the option of killing specific applications or the system’s network connection.

10. We are currently beta testing a new client for Microsoft Windows systems that offers support for the OpenVPN, L2TP, and PPTP VPN protocols.

11. We host and maintain our own DNS servers. We manage all our VPN servers but they are hosted and maintained by third-party data centers. We vet all providers prior to engaging their services and we continuously evaluate the quality of service and responsiveness to our requirements and requests.

12. We have hundreds of servers in 38 different countries and are always adding more. The most up-to-date list can be found here.

IronSocket website

Seed4.me

1. We do not analyze or deep-packet-inspect traffic. We also do not keep logs on VPN nodes. General connection logs are stored on a secure server for 7 days to help solve any network issues; they are deleted after those seven days if there are no network problems.

2. Seed4.Me Inc., registered in Taiwan. We are not aware of any legislation requiring us to share client information, nor of any precedents in Taiwan where client information was disclosed. We do not hold much information anyway. On the other hand, we do not welcome illegal activities which potentially harm other people.

3. Currently we utilize Google Analytics and G Suite (formerly Google Apps). Regarding G Suite, we do not store any sensitive information there, only support issues.

4. In case of abuse, we null-route the IP to keep ourselves in compliance with the DMCA. Currently we use simple firewall rules to block torrents in countries where the DMCA applies.
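Seed4.me does not publish its rules, but purely as a sketch, port-based blocking of this kind on a Linux gateway might look like the following (the 6881-6889 range is only the conventional BitTorrent default, an assumption):

# Drop forwarded traffic on the conventional BitTorrent port range (illustrative only)
iptables -A FORWARD -p tcp --dport 6881:6889 -j DROP
iptables -A FORWARD -p udp --dport 6881:6889 -j DROP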

5. We will act in accordance with the laws of the jurisdiction, but only if a court order comes from the jurisdiction where the affected server is located. Fortunately, as I said before, we do not keep any logs on VPN nodes; on the other hand, we do not encourage illegal activity. This has never happened.

6. Torrents are allowed on our VPN servers in Switzerland and Sweden. These are torrent-friendly countries with high-quality data centers and network. We treat BitTorrent, P2P, streaming and any other traffic equally on all servers.

7. We accept Bitcoin, PayPal, Visa, MasterCard, Webmoney, QIWI, Yandex.Money, bank transfer, and in-app purchases in our mobile apps. We do not store sensitive payment information on our servers; in most cases the payment system simply sends us a notification of a successful payment and its amount. We validate this data and grant access to the VPN. By the way, we do not require the cardholder’s name when someone pays for the VPN in our desktop app.

8. Obfuscated OpenVPN with a 2048-bit key is a good choice; it’s available in our desktop and Android apps. Also, our iOS app has an Automatic Protection option that guarantees, for example, that all outgoing connections on open Wi-Fi are encrypted and passed through a secure VPN channel.

9. We do provide DNS leak protection in our desktop app, and we suggest that customers turn off IPv6 support. We don’t provide a kill switch for desktop yet, but we are compatible with free software that prevents unsecured connections after the VPN connection goes down.
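One common do-it-yourself kill switch on Linux is a default-deny firewall that only permits traffic through the VPN tunnel. This is a simplified sketch, and the tun0 interface name and OpenVPN port 1194 are assumptions that depend on your configuration:

# Block all outbound traffic by default
iptables -P OUTPUT DROP
# Keep loopback traffic working
iptables -A OUTPUT -o lo -j ACCEPT
# Allow traffic inside the VPN tunnel
iptables -A OUTPUT -o tun0 -j ACCEPT
# Allow the VPN handshake itself (add a DNS rule too if you resolve the server by name)
iptables -A OUTPUT -p udp --dport 1194 -j ACCEPT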

10. We have apps for Windows, iOS, Android and Amazon Kindle.

11. All servers are remotely administered by our team only, with no outsourcing. No data is stored on VPN nodes (if a node is confiscated, there will not be any data on it). We prefer to deal with trustworthy Tier-3 (PCI-DSS) data centers and providers to ensure reliable service with high security. As for DNS, we use Google; users can override these settings with their own.

12. Currently we offer VPN nodes in 17 countries: USA, UK, Canada, France, Russia, Switzerland (torrent-friendly), Sweden (torrent-friendly), Ukraine, Netherlands, Spain, Germany, Italy, India, Hong Kong, Singapore, Israel and South Korea.

Seed4.me website

Note: several of the providers listed in this article are TorrentFreak sponsors.

—–

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Baby, you’re a (legal, indoor) firework


Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/legal-indoor-firework/

Dr Lucy Rogers is more than just a human LED. She’s also an incredibly imaginative digital maker, ready and willing to void warranties in her quest to take things apart and put them back together again, better than before. With her recipe for legal, digital indoor fireworks, she does exactly that, leaving an electronic cigarette in a battered state as it produces the smoke effects for this awesome build.

Firecracker Demo Video

Uploaded by IBM Internet of Things on 2017-02-28.

In her IBM blog post, Lucy offers a basic rundown of the build. While it may not be a complete how-to for building the firecrackers, the provided GitHub link and commentary should be enough for the seasoned maker to attempt their own version. If you feel less confident about producing the complete build yourself, there are more than enough resources available online to help you create something flashy and bangy without the added smoke show.

Lucy Rogers Firecracker Raspberry Pi

For the physical build itself, Lucy used a plastic soft drink bottle, a paper plate, and plastic tubing. Once painted, they provided the body for her firecrackers, and the support needed to keep the LED NeoPixels in place. She also drilled holes into the main plastic tube that ran up the centre of the firecracker, allowing smoke to billow out at random points. More of that to come.

Lucy Rogers Firecracker Raspberry Pi

Spray paint and a touch of gold transform the pieces of plastic piping into firecrackers

The cracking, banging sounds play via a USB audio adapter due to complications between the NeoPixels and the audio jack. Lucy explains:

The audio settings need to be set in the Raspberry Pi’s configuration settings (raspi-config). I also used the Linux program ‘alsamixer’ to set the volume. The firecrackers sound file was made by Phil Andrew. I found that using the Node-RED ‘exec node’ calling the ‘mpg123’ program worked best.
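For anyone recreating the audio chain, the same pieces can be exercised from a shell before wiring them into Node-RED; the card index and file path below are assumptions that will differ per setup:

# Set the playback volume on the USB audio adapter (find the card index with 'aplay -l')
amixer -c 1 set PCM 90%
# Play the firecracker sound, as the Node-RED exec node does
mpg123 /home/pi/sounds/firecracker.mp3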

Lucy states that the hacking of the e-cigarette was the hardest part of the build. For the smoke show itself, she reversed its recommended usage as follows:

On an electronic cigarette, if you blow down the air-intake hole (not the outlet hole from which you would normally inhale), smoke comes out of the outlet hole. I attached an aquarium pump to the air-intake hole and the firecracker pipe to the outlet, to make smoke on demand.

For the power, she gingerly hacked at the body with a pipe cutter before replacing the inner LiPo battery with a 30W isolated DC-DC converter, allowing for a safer power flow throughout the build (for “safer flow”, read “less likely to blow up the Raspberry Pi”).

Lucy Rogers internal workings Firecracker Raspberry Pi

The pump and e-cigarette fit snugly inside the painted bottle, while the Raspberry Pi remains outside

The project was partly inspired by Lucy’s work with Robin Hill Country Park. A how-to of that build can be seen below:

Dr Lucy Rogers Electronic Fire Crackers

www.farnell.com Dr Lucy Rogers presents her exciting Fire Crackers project, taking you from the initial concept right through to installation. Whilst working in partnership with the Robin Hill country park on the Isle of Wight, Lucy wanted to develop a solution for creating safe electronic Fire Crackers, for their Chinese New year festival.

Although I won’t challenge you all to dismantle electric cigarettes, nor do I expect you to spend money on strobe lights, sensors, and other such peripherals, it would be great to see some other attempts at digital home fireworks. If you build, or have built, anything flashy and noisy, please share it in the comments below.

The post Baby, you’re a (legal, indoor) firework appeared first on Raspberry Pi.

Announcing the AWS Health Tools Repository


Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/announcing-the-aws-health-tools-repository/

Tipu Qureshi and Ram Atur join us today with really cool news about a Git repository for AWS Health / Personal Health Dashboard.

-Ana


Today, we’re happy to release the AWS Health Tools repository, a community-based source of tools to automate remediation actions and customize Health alerts.

The AWS Health service provides personalized information about events that can affect your AWS infrastructure, guides you through scheduled changes, and accelerates the troubleshooting of issues that affect your AWS resources and accounts.  The AWS Health API also powers the Personal Health Dashboard, which gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources. You can use Amazon CloudWatch Events to detect and react to changes in the status of AWS Personal Health Dashboard (AWS Health) events.

AWS Health Tools takes advantage of the integration of AWS Health, Amazon CloudWatch Events and AWS Lambda to implement customized automation in response to events regarding your AWS infrastructure. As an example, you can use AWS Health Tools to pause your deployments that are part of AWS CodePipeline when a CloudWatch event is generated in response to an AWS Health issue.
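As a sketch of that pattern, the following AWS CLI commands create a CloudWatch Events rule that matches all AWS Health events and point it at a Lambda function; the function name and ARN are placeholders, and the actual remediation logic lives in the function itself:

# Match every event published by AWS Health
aws events put-rule --name AWSHealthEventRule --event-pattern '{"source": ["aws.health"]}'
# Send matching events to a Lambda function (placeholder ARN)
aws events put-targets --rule AWSHealthEventRule \
  --targets Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:HealthToolExample
# Allow CloudWatch Events to invoke the function
aws lambda add-permission --function-name HealthToolExample \
  --statement-id AWSHealthEventRule --action lambda:InvokeFunction \
  --principal events.amazonaws.com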

Diagram of the AWS Health Tools architecture

The AWS Health Tools repository empowers customers to effectively utilize AWS Health events by tapping into the collective ingenuity and expertise of the AWS community. The repository is free, public, and hosted on an independent platform. Furthermore, the repository contains full source code, allowing you to learn and contribute. We look forward to working together to leverage the combined wisdom and lessons learned by our experts and by experts in the broader AWS user base.

Here’s a sample of the AWS Health tools that you now have access to:

To get started using these tools in your AWS account, see the readme file on GitHub. We encourage you to use this repository to share with the AWS community the AWS Health Tools you have written.

-Tipu Qureshi and Ram Atur

How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network


Post Syndicated from Shawn Marck original https://aws.amazon.com/blogs/security/how-to-protect-your-web-application-against-ddos-attacks-by-using-amazon-route-53-and-a-content-delivery-network/

Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper.

In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to your externally hosted content delivery network (CDN) distribution.

Background

When browsing the Internet, a user might type example.com instead of www.example.com. To make sure these requests are routed properly, it is necessary to create a Route 53 alias resource record set for the zone apex. For example.com, this would be an alias resource record set without any subdomain (www) defined. With Route 53, you can use an alias resource record set to point www or your zone apex directly at a CloudFront distribution. As a result, anyone resolving example.com or www.example.com will see only the CloudFront distribution. This makes it difficult for a malicious actor to find and attack your application origin.

You can also use Route 53 to route end users to a CDN outside AWS. The CDN provider will ask you to create a CNAME record to point www.example.com to your CDN distribution’s hostname. Unfortunately, it is not possible to point your zone apex at the CDN this way, because a zone apex cannot be a CNAME. As a result, users who type example.com without www will not be routed to your web application unless you point the zone apex directly to your application origin.

The benefit of a secure redirect from the zone apex to www is that it helps protect your origin from being exposed to direct attacks.

Solution overview

The following solution diagram shows the AWS services this solution uses and how the solution uses them.

Diagram showing how AWS services are used in this post's solution

Here is how the process works:

  1. A user’s browser makes a DNS request to Route 53.
  2. Route 53 has a hosted zone for the example.com domain.
  3. The hosted zone serves the record:
    1. If the request is for the zone apex, the alias resource record set for the CloudFront distribution is served.
    2. If the request is for the www subdomain, the CNAME for the externally hosted CDN is served.
  4. CloudFront forwards the request to Amazon S3.
  5. S3 performs a secure redirect from example.com to www.example.com.

Note: All of the steps in this blog post’s solution use example.com as a domain name. You must replace this domain name with your own domain name.

AWS services used in this solution

You will use three AWS services in this walkthrough to build your zone apex–to–external CDN distribution redirect:

  • Route 53 – This post assumes that you are already using Route 53 to route users to your web application, which provides you with protection against common DDoS attacks, including DNS query floods. To learn more about migrating to Route 53, see Getting Started with Amazon Route 53.
  • S3 – S3 is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. S3 also allows you to configure a bucket for website hosting. In this walkthrough, you will use the S3 website hosting feature to redirect users from example.com to www.example.com, which points to your externally hosted CDN.
  • CloudFront – When architecting your application for DDoS resiliency, it is important to protect origin resources, such as S3 buckets, from discovery by a malicious actor. This is known as obfuscation. In this walkthrough, you will use a CloudFront distribution to obfuscate your S3 bucket.

Prerequisites

The solution in this blog post assumes that you already have the following components as part of your architecture:

  1. A Route 53 hosted zone for your domain.
  2. A CNAME alias resource record set pointing to your CDN.

Deploy the solution

In this solution, you:

  1. Create an S3 bucket with HTTP redirection. This allows requests made to your zone apex to be redirected to your www subdomain.
  2. Create and configure a CloudFront web distribution. I use a CloudFront distribution in front of my S3 web redirect so that I can leverage the advanced DDoS protection and scale that is native to CloudFront.
  3. Configure an alias resource record set in your hosted zone. Alias resource record sets are similar to CNAME records, but you can set them at the zone apex.
  4. Validate that the redirect is working.

Step 1: Create an S3 bucket with HTTP redirection

The following steps show how to configure your S3 bucket as a static website that will perform HTTP redirects to your www URL:

  1. Open the AWS Management Console. Navigate to the S3 console and create an S3 bucket in the region of your choice.
  2. Configure static website hosting to redirect all requests to another host name:
    1. Choose the S3 bucket you just created and then choose Properties.
      Screenshot showing choosing the S3 bucket and the Properties button
    2. Choose Static Website Hosting.
      Screenshot of choosing Static Website Hosting
    3. Choose Redirect all requests to another host name, and type your www domain (for this walkthrough, www.example.com) in the Redirect all requests to box, as shown in the following screenshot.
      Screenshot of Static Website Hosting settings to choose

Note: At the top of this tab, you will see an endpoint. Copy the endpoint because you will need it in Step 2 when you configure the CloudFront distribution. In this example, the endpoint is example-com.s3-website-us-east-1.amazonaws.com.
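If you prefer to script this step, the equivalent configuration can be applied with the AWS CLI; the bucket name matches this walkthrough’s example and must be replaced with your own:

# Create the bucket and configure it to redirect every request to the www host
aws s3api create-bucket --bucket example-com --region us-east-1
aws s3api put-bucket-website --bucket example-com \
  --website-configuration '{"RedirectAllRequestsTo": {"HostName": "www.example.com"}}'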

Step 2: Create and configure a CloudFront web distribution

The following steps show how to create a CloudFront web distribution that protects the S3 bucket:

  1. From the AWS Management Console, choose CloudFront.
  2. On the first page of the Create Distribution Wizard, in the Web section, choose Get Started.
  3. The Create Distribution page has many values you can specify. For this walkthrough, you need to specify only two settings:
    1. Origin Settings:
      • Origin Domain Name – When you click in this box, a menu appears with AWS resources you can choose. Rather than choosing the S3 bucket from the menu, paste the website endpoint URL you copied in Step 1 so that CloudFront uses the bucket’s website hosting endpoint and the redirect works. In this example, the endpoint is example-com.s3-website-us-east-1.amazonaws.com.
        Screenshot of Origin Domain Name
    1. Distribution Settings:
      • Alternate Domain Names (CNAMEs) – Type the zone apex (root domain; for this walkthrough, it is example.com).
        Screenshot of Alternate Domain Names
  4. Click Create Distribution.
  5. Wait for the CloudFront distribution to deploy completely before proceeding to Step 3. After CloudFront creates your distribution, the value of the Status column for your distribution will change from InProgress to Deployed. The distribution is then ready to process requests.

Step 3: Configure an alias resource record set in your hosted zone

In this step, you use Route 53 to configure an alias resource record set for your zone apex that resolves to the CloudFront distribution you made in Step 2:

  1. From the AWS Management Console, choose Route 53 and choose Hosted zones.
  2. On the Hosted zones page, choose your domain. This takes you to the Record sets page.
    Screenshot of choosing the domain on the Hosted zones page
  3. Click Create Record Set.
  4. Leave the Name box blank and choose Alias: Yes.
  5. Click the Alias Target box, and choose the CloudFront distribution you created in Step 2. If the distribution does not appear in the list automatically, you can copy and paste the name exactly as it appears in the CloudFront console.
  6. Click Create.
    Screenshot of creating the record set
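The same alias record can be created with the AWS CLI. The hosted zone ID and distribution domain below are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 uses for all CloudFront alias targets:

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'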

Step 4: Validate that the redirect is working

To confirm that you have correctly configured all components of this solution and your zone apex is redirecting to the www domain as expected, open a browser and navigate to your zone apex. In this walkthrough, the zone apex is http://example.com and it should redirect automatically to http://www.example.com.
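You can run the same check from a terminal; a correctly configured stack responds with output along these lines:

curl -sI http://example.com | grep -iE '^(HTTP|Location)'
# HTTP/1.1 301 Moved Permanently
# Location: http://www.example.com/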

Summary

In this post, I showed how you can help protect your web application against DDoS attacks by using Route 53 to perform a secure redirect to your externally hosted CDN distribution. This helps protect your origin from being exposed to direct DDoS attacks.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing the solution in this blog post, start a new thread in the Route 53 forum.

– Shawn


AWS Database Migration Service – 20,000 Migrations and Counting


Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-database-migration-service-20000-migrations-and-counting/

I first wrote about AWS Database Migration Service just about a year ago in my AWS Database Migration Service post. At that time I noted that over 1,000 AWS customers had already made use of the service as part of their move to AWS.

As a quick recap, AWS Database Migration Service and the Schema Conversion Tool (SCT) help our customers migrate their relational data from expensive, proprietary databases and data warehouses (either on premises or in the cloud, but with restrictive licensing terms either way) to more cost-effective cloud-based databases and data warehouses such as Amazon Aurora, Amazon Redshift, MySQL, MariaDB, and PostgreSQL, with minimal downtime along the way. Our customers tell us that they love the flexibility and the cost-effective nature of these moves. For example, moving to Amazon Aurora gives them access to a database that is MySQL and PostgreSQL compatible, at 1/10th the cost of a commercial database. Take a peek at our AWS Database Migration Services Customer Testimonials to see how Expedia, Thomas Publishing, Pega, and Veoci have made use of the service.

20,000 Unique Migrations
I’m pleased to be able to announce that our customers have already used AWS Database Migration Service to migrate 20,000 unique databases to AWS and that the pace continues to accelerate (we reached 10,000 migrations in September of 2016).

We’ve added many new features to DMS and SCT over the past year. Here’s a summary:

Learn More
Here are some resources that will help you to learn more and to get your own migrations underway, starting with some recent webinars:

Migrating From Sharded to Scale-Up – Some of our customers implemented a scale-out strategy in order to deal with their relational workload, sharding their database across multiple independent pieces, each running on a separate host. As part of their migration, these customers often consolidate two or more shards onto a single Aurora instance, reducing complexity, increasing reliability, and saving money along the way. If you’d like to do this, check out the blog post, webinar recording, and presentation.

Migrating From Oracle or SQL Server to Aurora – Other customers migrate from commercial databases such as Oracle or SQL Server to Aurora. If you would like to do this, check out this presentation and the accompanying webinar recording.

We also have plenty of helpful blog posts on the AWS Database Blog:

  • Reduce Resource Consumption by Consolidating Your Sharded System into Aurora – “You might, in fact, save bunches of money by consolidating your sharded system into a single Aurora instance or fewer shards running on Aurora. That is exactly what this blog post is all about.”
  • How to Migrate Your Oracle Database to Amazon Aurora – “This blog post gives you a quick overview of how you can use the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to facilitate and simplify migrating your commercial database to Amazon Aurora. In this case, we focus on migrating from Oracle to the MySQL-compatible Amazon Aurora.”
  • Cross-Engine Database Replication Using AWS Schema Conversion Tool and AWS Database Migration Service – “AWS SCT makes heterogeneous database migrations easier by automatically converting source database schema. AWS SCT also converts the majority of custom code, including views and functions, to a format compatible with the target database.”
  • Database Migration—What Do You Need to Know Before You Start? – “Congratulations! You have convinced your boss or the CIO to move your database to the cloud. Or you are the boss, CIO, or both, and you finally decided to jump on the bandwagon. What you’re trying to do is move your application to the new environment, platform, or technology (aka application modernization), because usually people don’t move databases for fun.”
  • How to Script a Database Migration – “You can use the AWS DMS console or the AWS CLI or the AWS SDK to perform the database migration. In this blog post, I will focus on performing the migration with the AWS CLI.”

The documentation includes five helpful walkthroughs:

There’s also a hands-on lab (you will need to register in order to participate).

See You at a Summit
The DMS team is planning to attend and present at many of our upcoming AWS Summits and would welcome the opportunity to discuss your database migration requirements in person.

Jeff;


UK Court Dismisses Case Against Torrent Site Proxy Operator


Post Syndicated from Ernesto original https://torrentfreak.com/uk-court-dismisses-case-against-torrent-site-proxy-operator-170307/

During the summer of 2014, City of London Police arrested then 20-year-old Callum Haywood of Bakersfield for his involvement with several proxy sites and services.

The investigation linked Haywood to Immunicity, a censorship circumvention tool that allowed users to route their traffic through a proxy network. In addition, he was also connected to the Pirate Bay proxy list Piratereverse.info plus several KickassTorrents and other proxy sites.

These proxies all served as a copy of the original sites, which are blocked by several UK ISPs, allowing users to bypass restrictions imposed by the High Court. While Haywood wasn’t operating any of the original sites, police decided to move ahead with the case anyway.

Following the arrest, progress was slow. It took nearly two years for the Police Intellectual Property Crime Unit (PIPCU) to formally announce charges, which amounted to one count of converting and/or transferring criminal property and six counts of possession of an article for use in fraud.

The charges related to the operation of a Pirate Bay proxy and two KickassTorrent proxies, and could’ve potentially landed the now 23-year-old a prison sentence of over ten years.

Haywood, however, denied any wrongdoing and after three dismissal hearings, his Honour Judge Dickinson QC of the Nottingham Crown Court agreed that the case should be dismissed. The initial dismissal was signed late last week, and after PIPCU chose not to appeal, the case is now over.

Piratereverse.info

No official paperwork has been released yet, but we were informed that the Court dismissed the case because of conflicting arguments that were presented during hearings last September and December.

The prosecution initially argued that the reverse proxy sites allowed users to make a fraudulent false representation to their ISP, by obscuring their IP-addresses. In a later hearing, however, they argued that Haywood was the one who made the false representation through his software.

The contradicting claims appear to demonstrate a lack of technical understanding on the prosecution’s side. In their September argument, they seemed to confuse a reverse proxy site with a forward proxy, which would indeed hide a user’s activity from an ISP.

In the December hearing, the prosecution made another error. In their attempt to explain what a reverse proxy server is, they relied on printouts from Wikipedia as official evidence. The judge wasn’t happy and stressed that it was unacceptable for the prosecution to submit clearly inadmissible evidence.

While Haywood is obviously pleased with the end result, the case took its toll. There was a looming uncertainty present for years, as well as the prospect of ending up in prison if the case went in the wrong direction.

“Two and a half years is a long time, I have gone from being an undergrad computer science student to graduating with a first class honours, and working as a software developer for a network appliance vendor,” Haywood informs TF.

“While I don’t think it has prevented me from achieving what I wanted, it has been a very difficult period of time for my family, and my friends. Having the case dismissed goes to show how the right decision was to plead not guilty – had I pleaded guilty, I would have been sentenced without contest.”

Haywood always maintained his innocence and in the end it paid off. He now hopes to leave the bad times behind and focus on the future. As for the authorities, he hopes that they will address real threats to society, instead of reverse proxy sites.

“I am pleased that it is over, as it was very frustrating. Everyone that I had discussed the case with who had a decent understanding of the technicalities was shocked that it had been allowed to get so far.

“It is also a disappointment how many resources were wasted in dealing with this case, when there are much more serious actual crimes on our streets,” Haywood concludes.

TorrentFreak contacted PIPCU for a comment, but we haven’t heard back at the time of publication.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Automating the Creation of Consistent Amazon EBS Snapshots with Amazon EC2 Systems Manager (Part 1)


Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/automating-the-creation-of-consistent-amazon-ebs-snapshots-with-amazon-ec2-systems-manager-part-1/

Nicolas Malaval, AWS Professional Consultant

While an EC2 instance is up and running, applications such as databases may hold data in memory or have pending I/O operations that an Amazon EBS snapshot cannot capture. If your application is unable to recover from such a state, you might lose vital data for your business.

Amazon EBS provides block level storage volumes for use with EC2 instances. With EBS, you can create point-in-time snapshots of volumes, stored reliably on Amazon S3. If you rely on EBS snapshots as your backup solution and you cannot turn off the instance during backup, you can create consistent EBS snapshots, which means informing the applications that they are about to be backed up so they can prepare.

In this post, the first of a two-part series, I show you how to use Run Command and Maintenance Window, two features of Amazon EC2 Systems Manager, to automate the execution of scripts on EC2 instances that create consistent EBS snapshots. First, I explain the approach. Then, I walk you through a practical example to create consistent snapshots of an Amazon Linux EC2 instance running MySQL.

Creating consistent EBS snapshots with Run Command

Run Command lets you securely and remotely manage the configuration of Windows or Linux instances. For example, you can run scripts―or commands―without having to log on locally to the instance. Run Command requires the SSM Agent to be installed on the EC2 instances.

I use Run Command to run a script remotely on EC2 instances. The script coordinates the preparation of applications and the creation of EBS snapshots, as follows:

  1. It instructs the applications and the file system to flush their cached data to disk and then temporarily blocks all I/O operations. At this moment, the EBS volume is in a consistent state.
  2. It retrieves the ID of the instance running the script using the Instance Metadata.
  3. It queries the EC2 API to obtain the ID of the EBS volumes attached to the instance, then to create a snapshot of each of the EBS volumes.
  4. Finally, it thaws I/O operations as soon as the EC2 API responds to the last request with a snapshot ID. It is not necessary to wait for the snapshot to complete.

The content of the script varies upon the system and the applications that should be prepared for backup. See the example sections later in this post.

Instances communicate with the Run Command API to retrieve commands to execute and return results, and with the EC2 API to get volume attachment information and create EBS snapshots. To grant permission to call the APIs, I launch the instances with an IAM role for EC2 instances. This role is attached to the SSM Managed Policy AmazonEC2RoleforSSM and to an inline policy which allows ec2:DescribeInstanceAttribute and ec2:CreateSnapshot actions.
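A minimal sketch of that inline policy, attached with the AWS CLI (the role and policy names here are placeholders):

aws iam put-role-policy --role-name ConsistentSnapshotRole \
  --policy-name AllowSnapshotActions \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstanceAttribute", "ec2:CreateSnapshot"],
      "Resource": "*"
    }]
  }'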

Using Run Command has multiple benefits:

  • The scripts are maintained centrally and any changes are effective immediately on every instance
  • Commands are executed remotely and the instances continuously retrieve and run new commands
  • Status and results of each command execution are reported by Run Command and the information is also stored in AWS CloudTrail for audit purposes
  • Run Command is integrated with IAM to allow you to control both the users and level of access

Executing commands on a daily basis with Maintenance Windows

Maintenance Windows allows you to specify a recurring time window during which Run Command tasks are executed. I use Maintenance Windows to create consistent EBS snapshots on a daily basis during off-peak hours, because it may temporarily increase resource utilization and affect application performance.

The maintenance window is registered with multiple targets. Each target is a set of EC2 instances that carry a “ConsistentSnapshot” tag whose value determines which script to execute. Each target is registered with a task assigned to an SSM document, which describes the actions Run Command performs to create consistent EBS snapshots on every instance of this target.

Automating the creation of consistent EBS snapshots of an Amazon Linux instance running MySQL

Here’s a practical example to create consistent EBS snapshots of an Amazon Linux instance running MySQL, with step-by-step instructions.

Understanding the example

I use Run Command to execute a shell script on the Amazon Linux instance:

mysql -u backup -h localhost -e 'FLUSH TABLES WITH READ LOCK;'

First, the shell script prepares MySQL for backup. The command FLUSH TABLES WITH READ LOCK waits for the active transactions to complete, flushes the cache to the filesystem, and prevents clients from making write operations (see FLUSH in the MySQL documentation). You should note that this MySQL backup method implies a short interruption of write operations, and the duration depends on the current size and workload. You should make sure that the backup does not affect your applications.

# Flush filesystem buffers, then freeze every mounted ext4 filesystem
sync

for target in $(findmnt -nlo TARGET -t ext4); do fsfreeze -f $target; done

It then suspends access to the filesystems and creates a stable image on disk. At this stage, the EBS volume is in a consistent state.

# Look up this instance's ID and region from the instance metadata service
instance=`curl -s http://169.254.169.254/latest/meta-data/instance-id`
region=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`
# Trim the trailing availability-zone letter (e.g. us-east-1a becomes us-east-1)
region=${region::-1}
# List the EBS volumes attached to this instance
volumes=`aws ec2 describe-instance-attribute --instance-id $instance --attribute blockDeviceMapping --output text --query BlockDeviceMappings[*].Ebs.VolumeId --region $region`

# Create a snapshot of each volume; there is no need to wait for completion
for volume in $(echo $volumes | tr " " "\n")
do aws ec2 create-snapshot --volume-id $volume --description 'Consistent snapshot of MySQL on Amazon Linux' --region $region > /dev/null 2>&1
done

It creates a snapshot of every EBS volume attached to the instance.

for target in $(findmnt -nlo TARGET -t ext4); do fsfreeze -u $target; done

mysql -u backup -h localhost -e 'UNLOCK TABLES;'

Finally, it resumes access to the filesystems and unlocks MySQL.

This shell script is contained in a new SSM document. The maintenance window executes a command from this document every day at midnight on every Linux instance that has a tag “ConsistentSnapshot” equal to “AmazonLinuxMySQL”.

Implementing and testing the example

First, use AWS CloudFormation to provision some of the required resources in your AWS account.

  1. Open Create a Stack to create a CloudFormation stack from the template.
  2. Follow the on-screen instructions.

CloudFormation creates the following resources:

  • A VPC with an Internet gateway attached
  • A subnet on this VPC with a new route table to enable access to the Internet and therefore to the AWS APIs
  • An IAM role to grant an EC2 instance the required permissions
  • An Amazon Linux instance in the subnet with the IAM role attached and the user data script entered to install and configure MySQL and the SSM Agent at launch
  • An SSM document containing the script described earlier.
  • An IAM role to grant the maintenance window the required permissions

After the stack creation completes, choose Outputs in the CloudFormation console and note the values that the process returned:

  • IAM role for the maintenance window
  • Name of the SSM document

Manually create a maintenance window:

  1. In the Amazon EC2 console, choose Systems Manager Shared Resources, Maintenance Windows, Create a Maintenance Window.
  2. For Name, enter ConsistentSnapshots.
  3. For Specify with, choose CRON/Rate expression. For CRON/Rate expression, enter cron(0 0 * * ? *). This creates consistent EBS snapshots every day at midnight UTC.
  4. For Duration, enter 2 hours. For Stop initiating tasks, enter 0 hour.
  5. Choose Create maintenance window. The system returns you to the Maintenance Window page.

After you create a maintenance window, assign a target where the task will run:

  1. In the Maintenance Window list, select the maintenance window that you just created.
  2. For Actions, choose Register targets.
  3. For Owner information, enter AmazonLinuxMySQL.
  4. Under Select targets by section, choose Specifying tags. For Tag Name, choose ConsistentSnapshot. For Tag Value, choose AmazonLinuxMySQL.
  5. Choose Register targets.

Finally, assign a task to perform during the window:

  1. In the Maintenance Window list, select the maintenance window that you just created.
  2. For Actions, choose Register tasks.
  3. For Document, select the SSM document that was returned by CloudFormation.
  4. Under the Target by section, select the target that you just created.
  5. Under the Role section, select the IAM role that was returned by CloudFormation.
  6. Under the Execute on section, for Targets, enter 1. For Stop after, enter 1 errors. You can adapt these numbers to your own needs.
  7. Choose Register task.

You can view the history either in the History tab of the Maintenance Windows navigation pane of the Amazon EC2 console, as illustrated in the following figure, or in the Run Command navigation pane, which shows more details about each command executed.

Screenshot of the maintenance window history

Conclusion

In this post, I showed how you can use Amazon EC2 Systems Manager to create consistent EBS snapshots on a daily basis, with a practical example for MySQL running in an Amazon Linux instance.

In the next part of this two-part series, I walk you through another example to create consistent snapshots of a Windows Server instance with Microsoft VSS (Volume Shadow Copy Service).

If you have questions or suggestions, please comment below.

International Women’s Day: Girls at Code Club


Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/international-womens-day-2017/

On International Women’s Day and every day, Raspberry Pi and Code Club are determined to support girls and women to fulfil their potential in the field of computing.

Code Club provides computing opportunities for kids aged nine to eleven within their local communities, and 40 percent of the children attending our 5000-plus UK clubs are girls. Code Club aims to inspire them to get excited about computer science and digital making, and to help them develop the skills and knowledge to succeed.

Big Birthday Bash Code Club Raspberry Pi Bag

Code Club’s broad appeal

From the very beginning, Code Club was designed to appeal equally to girls and boys. Co-founder Clare Sutcliffe describes how she took care to avoid anything that evoked gendered stereotypes:

When I was first designing Code Club – its brand, tone of voice and content – it was all with a gender-neutral feel firmly in mind. Anything that felt too gendered was ditched.

The resources that children use are selected to have broad appeal, engaging a wide range of interests. Code Club’s hosts and volunteers provide an environment that is welcoming and supportive.

Two girls coding at Code Club

A crucial challenge for the future is to sustain an interest in computing in girls as they enter their teenage years. As in other areas of science, technology, engineering, and maths, early success for girls doesn’t yet feed through into pursuing higher qualifications or entering related careers in large numbers. What can we all do to make sure that interested and talented young women know that this exciting field is for them?

The post International Women’s Day: Girls at Code Club appeared first on Raspberry Pi.

Big Updates to the Big Data on AWS Training Course!


Post Syndicated from Sara Snedeker original https://aws.amazon.com/blogs/big-data/big-updates-to-the-big-data-on-aws-training-course/

AWS offers a range of training resources to help you advance your knowledge with practical skills so you can get more out of the cloud. We’ve updated Big Data on AWS, a three-day, instructor-led training course to keep pace with the latest AWS big data innovations. This course allows you to hear big data best practices from an expert, get answers to your questions in person, and get hands-on practice using AWS big data services. Anyone interested in learning about the services and architecture patterns behind big data solutions on AWS will benefit from this training.

Specifically, this course introduces you to cloud-based big data solutions such as Amazon EMR, Amazon Redshift, Amazon Kinesis, and the rest of the AWS big data platform. This course shows you how to use Amazon EMR to process data using the broad ecosystem of Hadoop tools like Hive and Hue. We also teach you how to create big data environments, work with Amazon DynamoDB, Amazon Redshift, Amazon QuickSight, Amazon Athena, and Amazon Kinesis, and leverage best practices to design big data environments for security and cost-effectiveness.

This new version of the course incorporates feedback and adds new content. There’s a new module around Big Data Processing and Analytics that focuses on Amazon Athena. We’ve also updated the course with more context for IoT, more content for Kinesis Firehose, new content for Kinesis Analytics and Amazon Snowball, and added content for Amazon QuickSight.

If you’re interested in this course, you can search for a local Big Data on AWS class in our Global Class Schedule. Or, if you’d like to arrange a private onsite class for your team, you can contact us about scheduling. You can also explore other training courses on our Classes & Workshops page.


How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials


Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-access-the-aws-management-console-using-aws-microsoft-ad-and-your-on-premises-credentials/

AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML).

In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

Background

AWS customers use on-premises AD to administer user accounts, manage group memberships, and control access to on-premises resources. If you are like many AWS Microsoft AD customers, you also might want to enable your users to sign in to the AWS Management Console using on-premises AD credentials to manage AWS resources such as Amazon EC2, Amazon RDS, and Amazon S3.

Enabling such sign-in permissions has four key benefits:

  1. Your on-premises AD group administrators can now manage access to AWS resources with standard AD administration tools instead of IAM.
  2. Your users need to remember only one identity to sign in to AD and the AWS Management Console.
  3. Because users sign in with their on-premises AD credentials, access to the AWS Management Console benefits from your AD-enforced password policies.
  4. When you remove a user from AD, AWS Microsoft AD and IAM automatically revoke their access to AWS resources.

IAM roles provide a convenient way to define permissions to manage AWS resources. By using an AD trust between AWS Microsoft AD and your on-premises AD, you can assign your on-premises AD users and groups to IAM roles. This gives the assigned users and groups the IAM roles’ permissions to manage AWS resources. By assigning on-premises AD groups to IAM roles, you can now manage AWS access through standard AD administrative tools such as AD Users and Computers (ADUC).

After you assign your on-premises users or groups to IAM roles, your users can sign in to the AWS Management Console with their on-premises AD credentials. From there, they can select from a list of their assigned IAM roles. After they select a role, they can perform the management functions that you assigned to the IAM role.

In the rest of this post, I show you how to accomplish this in four steps:

  1. Create an access URL.
  2. Enable AWS Management Console access.
  3. Assign on-premises users and groups to IAM roles.
  4. Connect to the AWS Management Console.

Prerequisites

The instructions in this blog post require you to have the following components running:

Note: You can assign IAM roles to user identities stored in AWS Microsoft AD. For this post, I focus on assigning IAM roles to user identities stored in your on-premises AD. This requires a forest trust relationship between your on-premises Active Directory and your AWS Microsoft AD directory.

Solution overview

For the purposes of this post, I am the administrator who manages both AD and IAM roles in my company. My company wants to enable all employees to use on-premises credentials to sign in to the AWS Management Console to access and manage their AWS resources. My company uses EC2, RDS, and S3. To manage administrative permissions to these resources, I created a role for each service that gives full access to the service. I named these roles EC2FullAccess, RDSFullAccess, and S3FullAccess.

My company has two teams with different responsibilities, and we manage users in AD security groups. Mary is a member of the DevOps security group and is responsible for creating and managing our RDS databases, running data collection applications on EC2, and archiving information in S3. John and Richard are members of the BIMgrs security group and use EC2 to run analytics programs against the database. Though John and Richard need access to the database and archived information, they do not need to operate those systems. They do need permission to administer their own EC2 instances.

To grant appropriate access to the AWS resources, I need to assign the BIMgrs security group in AD to the EC2FullAccess role in IAM, and I need to assign the DevOps group to all three roles (EC2FullAccess, RDSFullAccess, and S3FullAccess). Also, I want to make sure all our employees have adequate time to complete administrative actions after signing in to the AWS Management Console, so I increase the console session timeout from 60 minutes to 240 minutes (4 hours).

The following diagram illustrates the relationships between my company’s AD users and groups and my company’s AWS roles and services. The left side of the diagram represents my on-premises AD that contains users and groups. The right side represents the AWS Cloud that contains the AWS Management Console, AWS resources, IAM roles, and our AWS Microsoft AD directory connected to our on-premises AD via a forest trust relationship.

Diagram of the relationship between on-premises AD users and groups and AWS IAM roles and services

Let’s get started with the steps for this scenario. For this post, I have already created an AWS Microsoft AD directory and established a two-way forest trust from AWS Microsoft AD to my on-premises AD. To manage access to AWS resources, I have also created the following IAM roles:

  • EC2FullAccess: Provides full access to EC2 and has the AmazonEC2FullAccess AWS managed policy attached.
  • RDSFullAccess: Provides full access to RDS via the AWS Management Console and has the AmazonRDSFullAccess managed policy attached.
  • S3FullAccess: Provides full access to S3 via the AWS Management Console and has the AmazonS3FullAccess managed policy attached.

To learn more about how to create IAM roles and attach managed policies, see Attaching Managed Policies.

Note: You must include a Directory Service trust policy on all roles that require access by users who sign in to the AWS Management Console using Microsoft AD. To learn more, see Editing the Trust Relationship for an Existing Role.

Step 1 – Create an access URL

The first step to enabling access to the AWS Management Console is to create a unique Access URL for your AWS Microsoft AD directory. An Access URL is a globally unique URL. AWS applications, such as the AWS Management Console, use the URL to connect to the AWS sign-in page that is linked to your AWS Microsoft AD directory. The Access URL does not provide any other access to your directory. To learn more about Access URLs, see Creating an Access URL.

Follow these steps to create an Access URL:

  1. Navigate to the Directory Service Console and choose your AWS Microsoft AD Directory ID.
  2. On the Directory Details page, choose the Apps & Services tab, type a unique access alias in the Access URL box, and then choose Create Access URL to create an Access URL for your directory.
    Screenshot of creating an Access URL

Your directory Access URL should be in the following format: <access-alias>.awsapps.com. In this example, I am using https://example-corp.awsapps.com.
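If you script your directory setup, the same alias can be created with the AWS CLI; the directory ID below is a placeholder:

# Create a globally unique access alias for the directory
aws ds create-alias --directory-id d-1234567890 --alias example-corp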

Step 2 – Enable AWS Management Console access

To allow users to sign in to AWS Management Console with their on-premises credentials, you must enable AWS Management Console access for your AWS Microsoft AD directory:

  1. From the Directory Service console, choose your AWS Microsoft AD Directory ID. Choose the AWS Management Console link in the AWS apps & services section.
    Screenshot of choosing the AWS Management Console link
  2. In the Enable AWS Management Console dialog box, choose Enable Access to enable console access for your directory.
    Screenshot of choosing Enable Access

This enables AWS Management Console access for your AWS Microsoft AD directory and provides you a URL that you can use to connect to the console. The URL is generated by appending “/console” to the end of the access URL that you created in Step 1: <access-alias>.awsapps.com/console. In this example, the AWS Management Console URL is https://example-corp.awsapps.com/console.
Screenshot of the URL to connect to the console

Step 3 – Assign on-premises users and groups to IAM roles

Before your users can use your Access URL to sign in to the AWS Management Console, you need to assign on-premises users or groups to IAM roles. This critical step enables you to control which AWS resources your on-premises users and groups can access from the AWS Management Console.

In my on-premises Active Directory, Mary is already a member of the DevOps group, and John and Richard are members of the BIMgrs group. I already set up the trust from AWS Microsoft AD to my on-premises AD, and I already created the EC2FullAccess, RDSFullAccess, and S3FullAccess roles that I will use.

I am now ready to assign on-premises groups to IAM roles. I do this by assigning the DevOps group to the EC2FullAccess, RDSFullAccess, and S3FullAccess IAM roles, and the BIMgrs group to the EC2FullAccess IAM role. Follow these steps to assign on-premises groups to IAM roles:

  1. Open the Directory Service details page of your AWS Microsoft AD directory and choose the AWS Management Console link on the Apps & services tab. Choose Continue to navigate to the Add Users and Groups to Roles page.
    Screenshot of Manage access to AWS Resources dialog box
  2. On the Add Users and Groups to Roles page, I see the three IAM roles that I have already configured (shown in the following screenshot). If you do not have any IAM roles with a Directory Service trust policy enabled, you can create new roles or enable Directory Service for existing roles.
  3. I will now assign the on-premises DevOps and BIMgrs groups to the EC2FullAccess role. To do so, I choose the EC2FullAccess IAM role link to navigate to the Role Detail page. Next, I choose the Add button to assign users or groups to the role, as shown in the following screenshot.
  4. In the Add Users and Groups to Role pop-up window, I select the on-premises Active Directory forest that contains the users and groups to assign. In this example, that forest is amazondomains.com. Note: If you do not use a trust to an on-premises AD and you create users and groups in your AWS Microsoft AD directory, you can choose the default option, this forest, to search for users in Microsoft AD.
  5. To assign an Active Directory group, choose the Group filter above the Search for field. Type the name of the Active Directory group in the search box and choose the search button (the magnifying glass). You can see that I was able to search for the DevOps group from my on-premises Active Directory.
  6. In this case, I added the on-premises groups, DevOps and BIMgrs, to the EC2FullAccess role. When finished, choose the Add button to assign users and groups to the IAM role. You have now successfully granted DevOps and BIMgrs on-premises AD groups full access to EC2. Users in these AD groups can now sign in to AWS Management Console using their on-premises credentials and manage EC2 instances.

From the Add Users and Groups to Roles page, I repeat the process to assign the remaining groups to the IAM roles. In the following screenshot, you can see that I have assigned the DevOps group to three roles and the BIMgrs group to only one role.

With my AD security groups assigned to my IAM roles, I can now add and delete on-premises users to the security groups to grant or revoke permissions to the IAM roles. Users in these security groups have access to all of their assigned roles.

You can optionally set the login session length for your AWS Microsoft AD directory. The default length is 1 hour, but you can increase it up to 12 hours. In my example, I set the console session time to 240 minutes (4 hours).

Step 4 – Connect to the AWS Management Console

I am now ready for my users to sign in to the AWS Management Console with their on-premises credentials. I emailed my users the access URL I created in Step 2: https://example-corp.awsapps.com/console. Now my users can go to the URL to sign in to the AWS Management Console.

When Mary, who is a member of the DevOps group, goes to the access URL, she sees a sign-in page to connect to the AWS Management Console. In the Username box, she can enter her sign-in name in three different ways:

  • NetBIOS name format (for example, example\mary)
  • Fully qualified domain name (FQDN) format (for example, example.com\mary)
  • User principal name (UPN) format (for example, mary@example.com)

Because the DevOps group is associated with three IAM roles, and because Mary is in the DevOps group, she can choose the role she wants from the list presented after she successfully logs in. The following screenshot shows this step.

If you also would like to secure the AWS Management Console with multi-factor authentication (MFA), you can add MFA to your AWS Microsoft AD configuration. To learn more about enabling MFA on Microsoft AD, see How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials.

Summary

AWS Microsoft AD makes it easier for you to connect to the AWS Management Console by using your on-premises credentials. It also enables you to reuse your on-premises AD security policies such as password expiration, password history, and account lockout policies while still controlling access to AWS resources.

To learn more about Directory Service, see the AWS Directory Service home page. If you have questions about this blog post, please start a new thread on the Directory Service forum.

– Vijay

Utopia


Post Syndicated from Eevee original https://eev.ee/blog/2017/03/08/utopia/

It’s been a while, but someone’s back on the Patreon blog topic tier! IndustrialRobot asks:

What does your personal utopia look like? Do you think we (as mankind) can achieve it? Why/why not?

Hm.

I spent the month up to my eyeballs in a jam game, but this question was in the back of my mind a lot. I could use it as a springboard to opine about anything, especially in the current climate: politics, religion, nationalism, war, economics, etc., etc. But all of that has been done to death by people who actually know what they’re talking about.

The question does say “personal”. So in a less abstract sense… what do I want the world to look like?

Mostly, I want everyone to have the freedom to make things.

I’ve been having a surprisingly hard time writing the rest of this without veering directly into the ravines of “basic income is good” and “maybe capitalism is suboptimal”. Those are true, but not really the tone I want here, and anyway they’ve been done to death by better writers than I. I’ve talked this out with Mel a few times, and it sounds much better aloud, so I’m going to try to drop my Blog Voice and just… talk.

*ahem*

Art versus business

So, art. Art is good.

I’m construing “art” very broadly here. More broadly than “media”, too. I’m including shitty robots, weird Twitter almost-bots, weird Twitter non-bots, even a great deal of open source software. Anything that even remotely resembles creative work — driven perhaps by curiosity, perhaps by practicality, but always by a soul bursting with ideas and a palpable need to get them out.

Western culture thrives on art. Most culture thrives on art. I’m not remotely qualified to defend this, but I suspect you could define culture in terms of art. It’s pretty important.

You’d think this would be reflected in how we discuss art, but often… it’s not. Tell me how often you’ve heard some of these gems.

  • “I could do that.”
  • “My eight-year-old kid could do that.”
  • Jokes about the worthlessness of liberal arts degrees.
  • Jokes about people trying to write novels in their spare time, the subtext being that only dreamy losers try to write novels, or something.
  • The caricature of a hippie working on a screenplay at Starbucks.

Oh, and then there was the guy who made a bot to scrape tons of art from artists who were using Patreon as a paywall — and a primary source of income. The justification was that artists shouldn’t expect to make a living off of, er, doing art, and should instead get “real jobs”.

I do wonder. How many of the people repeating these sentiments listen to music, or go to movies, or bought an iPhone because it’s prettier? Are those things not art that took real work to create? Is creating those things not a “real job”?

Perhaps a “real job” has to be one that’s not enjoyable, not a passion? And yet I can’t recall ever hearing anyone say that Taylor Swift should get a “real job”. Or that, say, pro football players should get “real jobs”. What do pro football players even do? They play a game a few times a year, and somehow this drives the flow of unimaginable amounts of money. We dress it up in the more serious-sounding “sport”, but it’s a game in the same general genre as hopscotch. There’s nothing wrong with that, but somehow it gets virtually none of the scorn that art does.

Another possible explanation is America’s partly-Christian, partly-capitalist attitude that you deserve exactly whatever you happen to have at the moment. (Whereas I deserve much more and will be getting it any day now.) Rich people are rich because they earned it, and we don’t question that further. Poor people are poor because they failed to earn it, and we don’t question that further, either. To do so would suggest that the system is somehow unfair, and hard work does not perfectly correlate with any particular measure of success.

I’m sure that factors in, but it’s not quite satisfying: I’ve also seen a good deal of spite aimed at people who are making a fairly decent chunk through Patreon or similar. Something is missing.

I thought, at first, that the key might be the American worship of work. Work is an inherent virtue. Politicians run entire campaigns based on how many jobs they’re going to create. Notably, no one seems too bothered about whether the work is useful, as long as someone decided to pay you for it.

Finally I stumbled upon the key. America doesn’t actually worship work. America worships business. Business means a company is deciding to pay you. Business means legitimacy. Business is what separates a hobby from a career.

And this presents a problem for art.

If you want to provide a service or sell a product, that’ll be hard, but America will at least try to look like it supports you. People are impressed that you’re an entrepreneur, a small business owner. Politicians will brag about policies made in your favor, whether or not they’re stabbing you in the back.

Small businesses have a particular structure they can develop into. You can divide work up. You can have someone in sales, someone in accounting. You can provide specifications and pay a factory to make your product. You can defer all of the non-creative work to someone else, whether that means experts in a particular field or unskilled labor.

But if your work is inherently creative, you can’t do that. The very thing you’re making is your idea in your style, driven by your experience. This is not work that’s readily parallelizable. Even if you sell physical merchandise and register as an LLC and have a dedicated workspace and do various other formal business-y things, the basic structure will still look the same: a single person doing the thing they enjoy. A hobbyist.

Consider the bulleted list from above. Those are all individual painters or artists or authors or screenwriters. The kinds of artists who earn respect without question are generally those managed by a business, those with branding: musical artists signed to labels, actors working for a studio. Even football players are part of a tangle of business.

(This doesn’t mean that business automatically confers respect, of course; tech in particular is full of anecdotes about nerds’ disdain for people whose jobs are design or UI or documentation or whathaveyou. But a businessy look seems to be a significant advantage.)

It seems that although art is a large part of what informs culture, we have a culture that defines “serious” endeavors in such a way that independent art cannot possibly be “serious”.

Art versus money

Which wouldn’t really matter at all, except that we also have a culture that expects you to pay for food and whatnot.

The reasoning isn’t too outlandish. Food is produced from a combination of work and resources. In exchange for getting the food, you should give back some of your own work and resources.

Obviously this is riddled with subtle flaws, but let’s roll with it for now and look at a case study. Like, uh, me!

Mel and I built and released two games together in the six weeks between mid-January and the end of February. Together, those games have made $1,000 in sales. The sales trail off fairly quickly within a few days of release, so we’ll call that the total gross for our effort.

I, dumb, having never actually sold anything before, thought this was phenomenal. Then I had the misfortune of doing some math.

Itch takes at least 10%, so we’re down to $900 net. Divided over six weeks, that’s $150 per week, before taxes — or $3.75 per hour if we’d been working full time.

Ah, but wait! There are two of us. And we hadn’t been working full time — we’d been working nearly every waking hour, which is at least twice “full time” hours. So we really made less than a dollar an hour. Even less than that, if you assume overtime pay.

From the perspective of capitalism, what is our incentive to do this? Between us, we easily have over thirty years of experience doing the things we do, and we spent weeks in crunch mode working on something, all to earn a small fraction of minimum wage. Did we not contribute back our own work and resources? Was our work worth so much less than waiting tables?

Waiting tables is a perfectly respectable way to earn a living, mind you. Ah, but wait! I’ve accidentally done something clever here. It is generally expected that you tip your waiter, because waiters are underpaid by the business, because the business assumes they’ll be tipped. Not tipping is actually, almost impressively, one of the rudest things you can do. And yet it’s not expected that you tip an artist whose work you enjoy, even though many such artists aren’t being paid at all.

Now, to be perfectly fair, both games were released for free. Even a dollar an hour is infinitely more than the zero dollars I was expecting — and I’m amazed and thankful we got as much as we did! Thank you so much. I bring it up not as a complaint, but as an armchair analysis of our systems of incentives.

People can take art for granted and whatever, yes, but there are several other factors at play here that hamper the ability for art to make money.

For one, I don’t want to sell my work. I suspect a great deal of independent artists and writers and open source developers (!) feel the same way. I create things because I want to, because I have to, because I feel so compelled to create that having a non-creative full-time job was making me miserable. I create things for the sake of expressing an idea. Attaching a price tag to something reduces the number of people who’ll experience it. In other words, selling my work would make it less valuable in my eyes, in much the same way that adding banner ads to my writing would make it less valuable.

And yet, I’m forced to sell something in some way, or else I’ll have to find someone who wants me to do bland mechanical work on their ideas in exchange for money… at the cost of producing sharply less work of my own. Thank goodness for Patreon, at least.

There’s also the reverse problem, in that people often don’t want to buy creative work. Everyone does sometimes, but only sometimes. It’s kind of a weird situation, and the internet has exacerbated it considerably.

Consider that if I write a book and print it on paper, that costs something. I have to pay for the paper and the ink and the use of someone else’s printer. If I want one more book, I have to pay a little more. I can cut those costs pretty considerably by printing a lot of books at once, but each copy still has a price, a marginal cost. If I then gave those books away, I would be actively losing money. So I can pretty well justify charging for a book.

Along comes the internet. Suddenly, copying costs nothing. Not only does it cost nothing, but it’s the fundamental operation. When you download a file or receive an email or visit a web site, you’re really getting a copy! Even the process which ultimately shows it on your screen involves a number of copies. This is so natural that we don’t even call it copying, don’t even think of it as copying.

True, bandwidth does cost something, but the rate is virtually nothing until you start looking at very big numbers indeed. I pay $60/mo for hosting this blog and a half dozen other sites — even that’s way more than I need, honestly, but downgrading would be a hassle — and I get 6TB of bandwidth. Even the longest of my posts haven’t exceeded 100KB. A post could be read by 64 million people before I’d start having a problem. If that were the population of a country, it’d be the 23rd largest in the world, between Italy and the UK.

How, then, do I justify charging for my writing? (Yes, I realize the irony in using my blog as an example in a post I’m being paid $88 to write.)

Well, I do pour effort and expertise and a fraction of my finite lifetime into it. But it doesn’t cost me anything tangible — I already had this hosting for something else! — and it’s easier all around to just put it online.

The same idea applies to a vast bulk of what’s online, and now suddenly we have a bit of a problem. Not only are we used to getting everything for free online, but we never bothered to build any sensible payment infrastructure. You still have to pay for everything by typing in a cryptic sequence of numbers from a little physical plastic card, which will then give you a small loan and charge the seller 30¢ plus 2.9% for the “convenience”.

If a website could say “pay 5¢ to read this” and you clicked a button in your browser and that was that, we might be onto something. But with our current setup, it costs far more than 5¢ to transfer 5¢, even though it’s just a number in a computer somewhere. The only people with the power and resources to fix this don’t want to fix it — they’d rather be the ones charging you the 30¢ plus 2.9%.

That leads to another factor of platforms and publishers, which are more than happy to eat a chunk of your earnings even when you do sell stuff. Google Play, the App Store, Steam, and anecdotally many other big-name comparative platforms all take 30% of your sales. A third! And that’s good! It seems common among book publishers to take 85% to 90%. For ebook sales — i.e., ones that don’t actually cost anything — they may generously lower that to a mere 75% to 85%.

Bless Patreon for only taking 5%. Itch.io is even better: it defaults to 10%, but gives you a slider, which you can set to anything from 0% to 100%.

I’ve mentioned all this before, so here’s a more novel thought: finite disposable income. Your audience only has so much money to spend on media right now. You can try to be more compelling to encourage them to spend more of it, rather than saving it, but ultimately everyone has a limit before they just plain run out of money.

Now, popularity is heavily influenced by social and network effects, so it tends to create a power law distribution: a few things are ridiculously hyperpopular, and then there’s a steep drop to a long tail of more modestly popular things.

If a new hyperpopular thing comes out, everyone is likely to want to buy it… but then that eats away a significant chunk of that finite pool of money that could’ve gone to less popular things.

This isn’t bad, and buying a popular thing doesn’t make you a bad person; it’s just what happens. I don’t think there’s any satisfying alternative that doesn’t involve radically changing the way we think about our economy.

Taylor Swift, who I’m only picking on because her infosec account follows me on Twitter, has sold tens of millions of albums and is worth something like a quarter of a billion dollars. Does she need more? If not, should she make all her albums free from now on?

Maybe she does, and maybe she shouldn’t. The alternative is for someone to somehow prevent her from making more money, which doesn’t sit well. Yet it feels almost heretical to even ask if someone “needs” more money, because we take for granted that she’s earned it — in part by being invested in by a record label and heavily advertised. The virtue is work, right? Don’t a lot of people work just as hard? (“But you have to be talented too!” Then please explain how wildly incompetent CEOs still make millions, and leave burning businesses only to be immediately hired by new ones? Anyway, are we really willing to bet there is no one equally talented but not as popular by sheer happenstance?)

It’s kind of a moot question anyway, since she’s probably under contract with billionaires and it’s not up to her.

Where the hell was I going with this.


Right, so. Money. Everyone needs some. But making it off art can be tricky, unless you’re one of the lucky handful who strike gold.

And I’m still pretty goddamn lucky to be able to even try this! I doubt I would’ve even gotten into game development by now if I were still working for an SF tech company — it just drained so much of my creative energy, and it’s enough of an uphill battle for me to get stuff done in the first place.

How many people do I know who are bursting with ideas, but have to work a tedious job to keep the lights on, and are too tired at the end of the day to get those ideas out? Make no mistake, making stuff takes work — a lot of it. And that’s if you’re already pretty good at the artform. If you want to learn to draw or paint or write or code, you have to do just as much work first, with much more frustration, and not as much to show for it.

Utopia

So there’s my utopia. I want to see a world where people have the breathing room to create the things they dream about and share them with the rest of us.

Can it happen? Maybe. I think the cultural issues are a fairly big blocker; we’d be much better off if we treated independent art with the same reverence as, say, people who play with a ball for twelve hours a year. Or if we treated liberal arts degrees as just as good as computer science degrees. (“But STEM can change the world!” Okay. How many people with computer science degrees would you estimate are changing the world, and how many are making a website 1% faster or keeping a lumbering COBOL beast running or trying to trick 1% more people into clicking on ads?)

I don’t really mean stuff like piracy, either. Piracy is a thing, but it’s… complicated. In my experience it’s not even artists who care the most about piracy; it’s massive publishers, the sort who see artists as a sponge to squeeze money out of. You know, the same people who make everything difficult to actually buy, infest it with DRM so it doesn’t work on half the stuff you own, and don’t even sell it in half the world.

I mean treating art as a free-floating commodity, detached from anyone who created it. I mean neo-Nazis adopting a comic book character as their mascot, against the creator’s wishes. I mean politicians and even media conglomerates using someone else’s music in well-funded videos and ads without even asking. I mean assuming Google Image Search, wonder that it is, is some kind of magical free art machine. I mean the snotty Reddit post I found while looking up Patreon’s fee structure, where some doofus was insisting that Patreon couldn’t possibly pay for a full-time YouTuber’s time, because not having a job meant they had lots of time to spare.

Maybe I should go one step further: everyone should create at least once or twice. Everyone should know what it’s like to have crafted something out of nothing, to be a fucking god within the microcosm of a computer screen or a sewing machine or a pottery table. Everyone should know that spark of inspiration that we don’t seem to know how to teach in math or science classes, even though it’s the entire basis of those as well. Everyone should know that there’s a good goddamn reason I listed open source software as a kind of art at the beginning of this post.

Basic income and more arts funding for public schools. If Uber can get billions of dollars for putting little car icons on top of Google Maps and not actually doing any of their own goddamn service themselves, I think we can afford to pump more cash into webcomics and indie games and, yes, even underwater basket weaving.


Automating the Creation of Consistent Amazon EBS Snapshots with Amazon EC2 Systems Manager (Part 2)


Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/automating-the-creation-of-consistent-amazon-ebs-snapshots-with-amazon-ec2-systems-manager-part-2/

Nicolas Malaval, AWS Professional Consultant

In my previous blog post, I discussed the challenge of creating Amazon EBS snapshots when you cannot turn off the instance during backup, because a snapshot taken while the instance is running might exclude data that has been cached by applications or the operating system. I showed how you can use EC2 Systems Manager to run a script remotely on EC2 instances to prepare the applications and the operating system for backup, and to automate the creation of snapshots on a daily basis. I gave a practical example of creating consistent Amazon EBS snapshots of Amazon Linux running a MySQL database.

In this post, I walk you through another practical example to create consistent snapshots of a Windows Server instance with Microsoft VSS (Volume Shadow Copy Service).

Understanding the example

VSS (Volume Shadow Copy Service) is a Windows built-in service that coordinates backup of VSS-compatible applications (SQL Server, Exchange Server, etc.) to flush and freeze their I/O operations.

The VSS service initiates and oversees the creation of shadow copies. A shadow copy is a consistent, point-in-time snapshot of a logical volume (C:, for example), which is not the same thing as an EBS snapshot. Multiple components are involved in the shadow copy creation:

  • The VSS requester requests the creation of shadow copies.
  • The VSS provider creates and maintains the shadow copies.
  • The VSS writers guarantee that you have a consistent data set to back up. They flush and freeze I/O operations, before the VSS provider creates the shadow copies, and release I/O operations, after the VSS provider has completed this action. There is usually one VSS writer for each VSS-compatible application.

I use Run Command to execute a PowerShell script on the Windows instance:

$EbsSnapshotPsFileName = "C:/tmp/ebsSnapshot.ps1"

$EbsSnapshotPs = New-Item -Type File $EbsSnapshotPsFileName -Force

Add-Content $EbsSnapshotPs '$InstanceID = Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/instance-id'
Add-Content $EbsSnapshotPs '$AZ = Invoke-RestMethod -Uri http://169.254.169.254/latest/meta-data/placement/availability-zone'
Add-Content $EbsSnapshotPs '$Region = $AZ.Substring(0, $AZ.Length-1)'
Add-Content $EbsSnapshotPs '$Volumes = ((Get-EC2InstanceAttribute -Region $Region -Instance "$InstanceId" -Attribute blockDeviceMapping).BlockDeviceMappings.Ebs |? {$_.Status -eq "attached"}).VolumeId'
Add-Content $EbsSnapshotPs '$Volumes | New-EC2Snapshot -Region $Region -Description " Consistent snapshot of a Windows instance with VSS" -Force'
Add-Content $EbsSnapshotPs 'Exit $LastExitCode'

First, the script writes in a local file named ebsSnapshot.ps1 a PowerShell script that creates a snapshot of every EBS volume attached to the instance.

$EbsSnapshotCmdFileName = "C:/tmp/ebsSnapshot.cmd"
$EbsSnapshotCmd = New-Item -Type File $EbsSnapshotCmdFileName -Force

Add-Content $EbsSnapshotCmd "powershell.exe -ExecutionPolicy Bypass -file $EbsSnapshotPsFileName"
Add-Content $EbsSnapshotCmd 'exit %ERRORLEVEL%'

It writes in a second file named ebsSnapshot.cmd a shell script that executes the PowerShell script created earlier.

$VssScriptFileName = "C:/tmp/scriptVss.txt"
$VssScript = New-Item -Type File $VssScriptFileName -Force

Add-Content $VssScript 'reset'
Add-Content $VssScript 'set context persistent'
Add-Content $VssScript 'set option differential'
Add-Content $VssScript 'begin backup'

$Drives = Get-WmiObject -Class Win32_LogicalDisk |? {$_.VolumeName -notmatch "Temporary" -and $_.DriveType -eq "3"} | Select-Object DeviceID

$Drives | ForEach-Object { Add-Content $VssScript $('add volume ' + $_.DeviceID + ' alias Volume' + $_.DeviceID.Substring(0, 1)) }

Add-Content $VssScript 'create'
Add-Content $VssScript "exec $EbsSnapshotCmdFileName"
Add-Content $VssScript 'end backup'

$Drives | ForEach-Object { Add-Content $VssScript $('delete shadows id %Volume' + $_.DeviceID.Substring(0, 1) + '%') }

Add-Content $VssScript 'exit'

It creates a third file named scriptVss.txt containing DiskShadow commands. DiskShadow is a tool included in Windows Server 2008 and above, that exposes the functionality offered by the VSS service. The script creates a shadow copy of each logical volume stored on EBS, runs the shell script ebsSnapshot.cmd to create a snapshot of underlying EBS volumes, and then deletes the shadow copies to free disk space.

diskshadow.exe /s $VssScriptFileName
Exit $LastExitCode

Finally, it runs DiskShadow in script mode.

This PowerShell script is contained in a new SSM document and the maintenance window executes a command from this document every day at midnight on every Windows instance that has a tag “ConsistentSnapshot” equal to “WindowsVSS”.

Implementing and testing the example

First, use AWS CloudFormation to provision some of the required resources in your AWS account.

  1. Open Create a Stack to create a CloudFormation stack from the template.
  2. Choose Next.
  3. Enter the ID of the latest AWS Windows Server 2016 Base AMI available in the current region (see Finding a Windows AMI) in pWindowsAmiId.
  4. Follow the on-screen instructions.

CloudFormation creates the following resources:

  • A VPC with an Internet gateway attached.
  • A subnet on this VPC with a new route table, to enable access to the Internet and therefore to the AWS APIs.
  • An IAM role to grant an EC2 instance the required permissions.
  • A security group that allows RDP access from the Internet, as you need to log on to the instance later on.
  • A Windows instance in the subnet with the IAM role and the security group attached.
  • A SSM document containing the script described in the section above to create consistent EBS snapshots.
  • Another SSM document containing a script to restore logical volumes to a consistent state, as explained in the next section.
  • An IAM role to grant the maintenance window the required permissions.

After the stack creation completes, choose Outputs in the CloudFormation console and note the values returned:

  • IAM role for the maintenance window
  • Names of the two SSM documents

Then, manually create a maintenance window, if you have not already created it. For detailed instructions, see the “Example” section in the previous blog post.

After you create a maintenance window, assign a target where the task will run:

  1. In the Maintenance Window list, choose the maintenance window that you just created.
  2. For Actions, choose Register targets.
  3. For Owner information, enter WindowsVSS.
  4. Under the Select targets by section, choose Specifying tags. For Tag Name, choose ConsistentSnapshot. For Tag Value, choose WindowsVSS.
  5. Choose Register targets.

Finally, assign a task to perform during the window:

  1. In the Maintenance Window list, choose the maintenance window that you just created.
  2. For Actions, choose Register tasks.
  3. For Document, select the name of the SSM document that was returned by CloudFormation, with which to create snapshots.
  4. Under the Target by section, choose the target that you just created.
  5. Under the Role section, select the IAM role that was returned by CloudFormation.
  6. Under Execute on, for Targets, enter 1. For Stop after, enter 1 errors.
  7. Choose Register task.
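If you prefer to script the task registration, the SSM RegisterTaskWithMaintenanceWindow API covers the same steps. The following is a minimal boto3 sketch; the window ID, target ID, document name, and role ARN are placeholders for illustration.

import boto3

ssm = boto3.client('ssm')

# Register the snapshot SSM document as a Run Command task on the
# maintenance window. All identifiers below are illustrative.
ssm.register_task_with_maintenance_window(
    WindowId='mw-0123456789abcdef0',
    Targets=[{'Key': 'WindowTargetIds', 'Values': ['<window-target-id>']}],
    TaskArn='<snapshot-document-name>',
    ServiceRoleArn='arn:aws:iam::123456789012:role/<maintenance-window-role>',
    TaskType='RUN_COMMAND',
    MaxConcurrency='1',
    MaxErrors='1',
    Priority=1
)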

You can view the history either on the History tab of the Maintenance Windows navigation pane of the Amazon EC2 console, or in the Run Command navigation pane, which provides more details about each command executed.

Restoring logical volumes to a consistent state

DiskShadow, the VSS requester in this case, uses the Windows built-in VSS provider. To create a shadow copy, this built-in provider does not make a complete copy of the data. Instead, it keeps a copy of each data block before a change overwrites it, in a dedicated storage area. The logical volume can be restored to its initial consistent state by combining the actual volume data with the initial data of the changed blocks.

The DiskShadow command create instructs the VSS service to proceed with the creation of shadow copies, including the release of I/O operations by the VSS writers after the shadow copies are created. Therefore, the EBS snapshots created by the exec command that follows may not be fully consistent.

Note: A workaround could be to build your own VSS provider in charge of creating EBS snapshots. Doing so would enable the EBS snapshots to be created before I/O operations are released. We will not develop this solution in this blog post.

Therefore, you need to “undo” any I/O operations that may have happened between the moment when the shadow copy was created and the moment when the EBS snapshots were created.

A solution consists of creating an EBS volume from the snapshot, attaching it to an intermediate Windows instance, and “reverting” the VSS shadow copy to restore the EBS volume to a consistent state. For the sake of simplicity, use the Windows instance that was backed up as the intermediate instance.
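The steps below use the console. If you would rather script the volume creation and attachment (steps 4 through 7), a minimal boto3 sketch follows; the snapshot ID, Availability Zone, instance ID, and device name are placeholders for illustration.

import boto3

ec2 = boto3.client('ec2')

# Create a volume from the consistent snapshot in the same
# Availability Zone as the intermediate instance (IDs illustrative).
volume = ec2.create_volume(
    SnapshotId='snap-0123456789abcdef0',
    AvailabilityZone='us-east-1a'
)
ec2.get_waiter('volume_available').wait(VolumeIds=[volume['VolumeId']])

# Attach the new volume to the intermediate Windows instance.
ec2.attach_volume(
    VolumeId=volume['VolumeId'],
    InstanceId='i-0123456789abcdef0',
    Device='xvdf'
)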

To manually restore an EBS snapshot to a consistent state:

  1. In the Amazon EC2 console, choose Instances.
  2. In the search box, enter Consistent EBS Snapshots – Windows with VSS. The search results should display a single instance. Note the Availability Zone for this instance.
  3. Choose Snapshots.
  4. Select the latest snapshot with the description “Consistent snapshot of Windows with VSS” and choose Actions, Create Volume.
  5. Select the same Availability Zone as the instance and choose Create, Volumes.
  6. Select the volume that was just created and choose Actions, Attach Volume.
  7. For Instance, choose Consistent EBS Snapshots – Windows with VSS and choose Attach.
  8. Choose Run Command, Run a command.
  9. In Command document, select the name of the SSM document for restoring snapshots that was returned by CloudFormation. For Target instances, select the Windows instance and choose Run.

Run Command executes the following PowerShell script on the Windows instance. It retrieves the list of offline disks—which corresponds in this case to the EBS volume that you just attached—and, for each offline disk, brings it online, reverts any existing shadow copies, and takes it offline again.

$OfflineDisks = (Get-Disk |? {$_.OperationalStatus -eq "Offline"})

foreach ($OfflineDisk in $OfflineDisks) {
  Set-Disk -Number $OfflineDisk.Number -IsOffline $False
  Set-Disk -Number $OfflineDisk.Number -IsReadonly $False
  Write-Host "Disk " $OfflineDisk.Signature " is now online"
}

$ShadowCopyIds = (Get-CimInstance Win32_ShadowCopy).Id
Write-Host "Number of shadow copies found: " $ShadowCopyIds.Count

foreach ($ShadowCopyId in $ShadowCopyIds) {
  "revert " + $ShadowCopyId | diskshadow
}

foreach ($OfflineDisk in $OfflineDisks) {
  $CurrentSignature = (Get-Disk -Number $OfflineDisk.Number).Signature
  if ($OfflineDisk.Signature -eq $CurrentSignature) {
    Set-Disk -Number $OfflineDisk.Number -IsReadonly $True
    Set-Disk -Number $OfflineDisk.Number -IsOffline $True
    Write-Host "Disk " $OfflineDisk.Number " is now offline"
  }
  else {
    Set-Disk -Number $OfflineDisk.Number -Signature $OfflineDisk.Signature
    Write-Host "Reverting to the initial disk signature: " $OfflineDisk.Signature
  }
}

The EBS volume is now in a consistent state and can be detached from the intermediate instance.

Conclusion

In this series of blog posts, I showed how you can use Amazon EC2 Systems Manager to create consistent EBS snapshots on a daily basis, with two practical examples for Linux and Windows. You can adapt this solution to your own requirements. For example, you may develop scripts for your own applications.

If you have questions or suggestions, please comment below.

Analyzing VPC Flow Logs with Amazon Kinesis Firehose, Amazon Athena, and Amazon QuickSight


Post Syndicated from Ian Robinson original https://aws.amazon.com/blogs/big-data/analyzing-vpc-flow-logs-with-amazon-kinesis-firehose-amazon-athena-and-amazon-quicksight/

Many business and operational processes require you to analyze large volumes of frequently updated data. Log analysis, for example, involves querying and visualizing large volumes of log data to identify behavioral patterns, understand application processing flows, and investigate and diagnose issues.

VPC flow logs capture information about the IP traffic going to and from network interfaces in VPCs in the Amazon VPC service. The logs allow you to investigate network traffic patterns and identify threats and risks across your VPC estate. Flow log data is stored using Amazon CloudWatch Logs. After you’ve created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

Flow logs can help you with a number of tasks. For example, you can use them to troubleshoot why specific traffic is not reaching an instance, which in turn can help you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.
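Each flow log record is a single space-separated line whose fields are, in order: version, account ID, network interface ID, source and destination address, source and destination port, protocol, packets, bytes, start and end time, action, and log status. Here is a representative record (the values are illustrative, adapted from the flow logs documentation):

2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK

This record describes 20 packets of SSH traffic (destination port 22, protocol 6/TCP) that the security rules accepted.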

This blog post shows how to build a serverless architecture by using Amazon Kinesis Firehose, AWS Lambda, Amazon S3, Amazon Athena, and Amazon QuickSight to collect, store, query, and visualize flow logs. In building this solution, you will also learn how to implement Athena best practices with regard to compressing and partitioning data so as to reduce query latencies and drive down query costs.

Summary of the solution

The solution described here is divided into three parts:

  • Send VPC Flow Logs to S3 for Analysis with Athena. This section describes how to use Lambda and Firehose to publish flow log data to S3, and how to create a table in Athena so that you can query this data.
  • Visualize Your Logs in QuickSight. Here you’ll learn how to use QuickSight and its Athena connector to build flow log analysis dashboards that you can share with other users in your organization.
  • Partition Your Data in Athena for Improved Query Performance and Reduced Costs. This section shows how you can use a Lambda function to automatically partition Athena data as it arrives in S3. This function will work with any Firehose stream and any other delivery mechanism that writes data to S3 using a year/month/day/hour prefix.

Partitioning your data is one of three strategies for improving Athena query performance and reducing costs. The other two are compressing your data, and converting it into columnar formats such as Apache Parquet. The solution described here automatically compresses your data, but it doesn’t convert it into a columnar format. Even if you don’t convert your data to a columnar format, as is the case here, it’s always worth compressing and partitioning it. For any large-scale solution, you should also consider converting it to Parquet.

Serverless Architecture for Analyzing VPC Flow Logs

Below is a diagram showing how the various services work together.

Diagram of the serverless architecture for analyzing VPC flow logs

When you create a flow log for a VPC, the log data is published to a log group in CloudWatch Logs. By using a CloudWatch Logs subscription, you can send a real-time feed of these log events to a Lambda function that uses Firehose to write the log data to S3.

Once the flow log data starts arriving in S3, you can write ad hoc SQL queries against it using Athena. For users that prefer to build dashboards and interactively explore the data in a visual manner, QuickSight allows you to easily build rich visualizations on top of Athena.

Send VPC Flow Logs to S3 for Analysis with Athena

In this section, we’ll describe how to send flow log data to S3 so that you can query it with Athena. The examples here use the us-east-1 region, but any region containing both Athena and Firehose can be used.

Create the Firehose delivery stream

Follow the steps described here to create a Firehose delivery stream with a new or existing S3 bucket as the destination. Keep most of the default settings, but select an AWS Identity and Access Management (IAM) role that has write access to your S3 bucket and specify GZIP compression. Name the delivery stream ‘VPCFlowLogsDefaultToS3’.
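If you prefer an SDK over the console, the Firehose CreateDeliveryStream API takes the same settings. The following is a minimal boto3 sketch; the role and bucket ARNs are placeholders for illustration.

import boto3

firehose = boto3.client('firehose')

# Create the delivery stream with GZIP compression enabled.
# The role and bucket ARNs below are illustrative.
firehose.create_delivery_stream(
    DeliveryStreamName='VPCFlowLogsDefaultToS3',
    S3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::123456789012:role/<firehose-delivery-role>',
        'BucketARN': 'arn:aws:s3:::<your-flow-log-bucket>',
        'CompressionFormat': 'GZIP'
    }
)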

Create a VPC flow log

First, follow these steps to turn on VPC flow logs for your default VPC.
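As with the delivery stream, you can also do this programmatically with the EC2 CreateFlowLogs API. The following is a minimal boto3 sketch; the VPC ID, log group name, and role ARN are placeholders for illustration.

import boto3

ec2 = boto3.client('ec2')

# Publish flow logs for the default VPC to a CloudWatch Logs group.
# The VPC ID, log group name, and role ARN below are illustrative.
ec2.create_flow_logs(
    ResourceIds=['vpc-0123456789abcdef0'],
    ResourceType='VPC',
    TrafficType='ALL',
    LogGroupName='vpc-flow-logs',
    DeliverLogsPermissionArn='arn:aws:iam::123456789012:role/<flow-logs-role>'
)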

Create an IAM role for Lambda to write to Firehose

Before you create a Lambda function to deliver logs to Firehose, you need to create an IAM role that allows Lambda to write batches of records to Firehose. Create a role named ‘lambda_kinesis_exec_role’ by following the steps below.

First, embed the following inline access policy.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "firehose:PutRecordBatch"
            ],
            "Resource": [
                "arn:aws:firehose:*:*:deliverystream/VPCFlowLogsDefaultToS3"
            ]
        }
    ]
}

Then, attach the following trust relationship to enable Lambda to assume this role.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create a Lambda Function to deliver CloudWatch Logs to Firehose

To create a Lambda function for delivering log events from CloudWatch to your ‘VPCFlowLogsDefaultToS3’ Firehose delivery stream, do the following:

  1. From the Lambda console, create a new Lambda function and select Blank Function.
  2. Choose Next when asked to configure a trigger.
  3. On the Configure function page, name the function ‘VPCFlowLogsToFirehose’.

Select the Python run-time, and copy this code from GitHub into the code pane.

Add an environment variable named DELIVERY_STREAM_NAME whose value is the name of the delivery stream created in the first step of this walk-through (‘VPCFlowLogsDefaultToS3’):


  4. Specify the ‘lambda_kinesis_exec_role’ you created in the previous step, and set the timeout to one minute.
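The linked GitHub code is the function used in this walkthrough. As a rough sketch of the pattern it implements (not the actual repository code), a CloudWatch Logs subscription handler decompresses the incoming payload, extracts the log messages, and writes them to Firehose in batches:

import base64
import gzip
import json
import os

import boto3

firehose = boto3.client('firehose')

def handler(event, context):
    # CloudWatch Logs delivers events as base64-encoded, gzipped JSON.
    payload = gzip.decompress(base64.b64decode(event['awslogs']['data']))
    data = json.loads(payload)

    # Ignore the control message sent when the subscription is created.
    if data.get('messageType') != 'DATA_MESSAGE':
        return

    records = [{'Data': (e['message'] + '\n').encode('utf-8')}
               for e in data['logEvents']]

    # PutRecordBatch accepts at most 500 records per call.
    for i in range(0, len(records), 500):
        firehose.put_record_batch(
            DeliveryStreamName=os.environ['DELIVERY_STREAM_NAME'],
            Records=records[i:i + 500]
        )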

Create a CloudWatch subscription for your Lambda function

Within CloudWatch Logs, take the following steps:

  1. Choose the log group for your VPC flow logs (you might need to wait a few minutes for the log group to show up if the flow logs were just created).
  2. For Actions, choose Stream to AWS Lambda.


  3. Select the ‘VPCFlowLogsToFirehose’ Lambda function that was created in the previous step.
  4. For the format, choose Amazon VPC Flow Logs.


Create an external table in Athena

Amazon Athena allows you to query data in S3 using standard SQL without having to provision or manage any infrastructure. Athena works with a variety of common data formats, including CSV, JSON, Parquet, and ORC, so there’s no need to transform your data prior to querying it. You simply define your schema, and then run queries using the query editor in the AWS Management Console or programmatically using the Athena JDBC driver.

Athena stores your database and table definitions in a data catalog compatible with the Hive metastore. For this example, you’ll create a single table definition over your flow log files. The DDL for this table is specified later in this section. Before executing this DDL, take note of the following:

  • You will need to replace <bucket_and_prefix> with the name of the Firehose destination for your flow log data (including the prefix).
  • The CREATE TABLE definition includes the EXTERNAL keyword. If you omit this keyword, Athena will return an error. EXTERNAL ensures that the table metadata is stored in the data catalog without impacting the underlying data stored on S3. If you drop an external table, the table metadata is deleted from the catalog, but your data remains in S3.
  • The columns for the vpc_flow_logs table map to the fields in a flow log record. Flow log records comprise space-separated strings. To parse the fields from each record, Athena uses a serializer-deserializer class, or SerDe, which is a custom library that tells Athena how to handle your data.
  • The DDL specified here uses a regular expression SerDe to parse the space-separated flow log records. The regular expression itself is supplied using the “input.regex” SerDe property.

In the Athena query editor, enter the DDL below, and choose Run Query.

CREATE EXTERNAL TABLE IF NOT EXISTS vpc_flow_logs (
Version INT,
Account STRING,
InterfaceId STRING,
SourceAddress STRING,
DestinationAddress STRING,
SourcePort INT,
DestinationPort INT,
Protocol INT,
Packets INT,
Bytes INT,
StartTime INT,
EndTime INT,
Action STRING,
LogStatus STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    "input.regex" = "^([^ ]+)\\s+([0-9]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([0-9]+)\\s+([0-9]+)\\s+([^ ]+)\\s+([^ ]+)$")
LOCATION 's3://<bucket_and_prefix>/';


Query your data in Athena

After creating the table, you should be able to select the eye icon next to the table name to see a sample set of rows.


You can easily run various queries to investigate your flow logs. Here is an example that gets the top 25 source IPs for rejected traffic:


SELECT sourceaddress, count(*) cnt
FROM vpc_flow_logs
WHERE action = 'REJECT'
GROUP BY sourceaddress
ORDER BY cnt desc
LIMIT 25;


Visualize Your Logs in QuickSight

QuickSight allows you to visualize your Athena tables with a few simple clicks. You can sign up for QuickSight using your AWS account and get 1 user and 1 GB of SPICE capacity for free.

Before connecting QuickSight to Athena, make sure to grant QuickSight access to Athena and the associated S3 buckets in your account as described here. You can then create a new data set in QuickSight based on the Athena table you created.

Log into QuickSight and choose Manage data, New data set. Choose Athena as a new data source.


Name your data source “AthenaDataSource”. Select the default schema and the vpc_flow_logs table.


Choose Edit/Preview data. For starttime and endtime, set the data format to date rather than number. These two fields represent the start and end times of the flow log capture window, and they arrive as Unix timestamps in seconds.


Now select Save and visualize.

Let’s look at the start times for the different capture windows and the number of bytes that were sent. We’ll do this by selecting StartTime and Bytes from the field list. Notice that QuickSight automatically displays a time chart with the amount of traffic. You can easily change the date parameter to set different time granularities.

Here is an example showing a large spike of traffic for one day. This tells us that there was a lot of traffic on this day compared to the other days being plotted.

Screenshot of a time chart showing the traffic spike

You can easily build a rich analysis of REJECT and ACCEPT traffic across ports, IP addresses, and other facets of your data. You can then publish this analysis as a dashboard that can be shared with other QuickSight users in your organization.


Partition Your Data in Athena for Improved Query Performance and Reduced Costs

The solution described so far delivers GZIP-compressed flow log files to S3 on a frequent basis. Firehose places these files under a /year/month/day/hour/ key in the bucket you specified when creating the delivery stream. The external table definition you used when creating the vpc_flow_logs table in Athena encompasses all the files located within this time series keyspace.

Athena is priced per query based on the amount of data scanned by the query. With our existing solution, each query will scan all the files that have been delivered to S3. As the number of VPC flow log files increases, the amount of data scanned will also increase, which will affect both query latency and query cost.

You can reduce your query costs and get better performance by compressing your data, partitioning it, and converting it into columnar formats. Firehose has already been configured to compress the data delivered to S3. Now we will look at partitioning. (Converting the data to a columnar format, like Apache Parquet, is out of scope for this article.)

Partitioning your table helps you restrict the amount of data scanned by each query. Many tables benefit from being partitioned by time, particularly when the majority of queries include a time-based range restriction. Athena uses the Hive partitioning format, whereby partitions are separated into folders whose names contain key-value pairs that directly reflect the partitioning scheme (see the Athena documentation for more details).

The folder structure created by Firehose (for example, s3://my-vpc-flow-logs/2017/01/14/09/) is different from the Hive partitioning format (for example, s3://my-vpc-flow-logs/dt=2017-01-14-09-00/). However, using ALTER TABLE ADD PARTITION, you can manually add partitions and map them to portions of the keyspace created by the delivery stream.

The solution presented here uses a Lambda function and the Athena JDBC driver to execute ALTER TABLE ADD PARTITION statements on receipt of new files into S3, thereby automatically creating new partitions for Firehose delivery streams.

Create an IAM role for Lambda to execute Athena queries

Before you create the Lambda function, you will need to create an IAM role that allows Lambda to execute queries in Athena. Create a role named ‘lambda_athena_exec_role’ by following the instructions here.

First, embed the following inline access policy.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "athena:RunQuery",
                "athena:GetQueryExecution",
                "athena:GetQueryExecutions",
                "athena:GetQueryResults"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::aws-athena-query-results-*"
            ]
        }
    ]
}


Then, attach the following trust relationship to enable Lambda to assume this role.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create a partitioned version of the vpc_flow_log table

The vpc_flow_logs external table that you previously defined in Athena isn’t partitioned. To create a table with a partition named ‘IngestDateTime’, drop the original, and then recreate it using the following modified DDL.


DROP TABLE IF EXISTS vpc_flow_logs;

CREATE EXTERNAL TABLE IF NOT EXISTS vpc_flow_logs (
Version INT,
Account STRING,
InterfaceId STRING,
SourceAddress STRING,
DestinationAddress STRING,
SourcePort INT,
DestinationPort INT,
Protocol INT,
Packets INT,
Bytes INT,
StartTime INT,
EndTime INT,
Action STRING,
LogStatus STRING
)
PARTITIONED BY (IngestDateTime STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    "input.regex" = "^([^ ]+)\\s+([0-9]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([^ ]+)\\s+([0-9]+)\\s+([0-9]+)\\s+([^ ]+)\\s+([^ ]+)$")
LOCATION 's3://<bucket_and_prefix>/';

Create a Lambda function to create Athena partitions

To create the Lambda function:

  1. Clone the Lambda Java project from GitHub.
  2. Compile the .jar file according to the instructions in the README file, and copy it to a bucket in S3.
  3. Create a new Lambda function and select Blank Function.
  4. Choose Next when asked to configure a trigger.
  5. On the Configure function page, name the function ‘CreateAthenaPartitions’.
  6. Select the Java run-time.
  7. For Code entry type, choose Upload a file from Amazon S3.
  8. For the S3 link URL, enter the HTTPS-format URL of the .jar file you uploaded to S3.
  9. For the Lambda function, you’ll need to set several environment variables:
  • PARTITION_TYPE: Supply one of the following values: Month, Day, or Hour. This environment variable is optional. If you omit it, the Lambda function will default to creating new partitions every day. For this example, supply ‘Hour’.
  • TABLE_NAME: Use the format <database>.<table_name>—for example, ‘default.vpc_flow_logs’.
  • S3_STAGING_DIR: An Amazon S3 location to which your query output will be written. (Although the Lambda function is only executing DDL statements, Athena still writes an output file to S3. The IAM policy that you created earlier assumes that the query output bucket name begins with ‘aws-athena-query-results-’.)
  • ATHENA_REGION: The region in which Athena is located. For this example, use ‘us-east-1’.


  10. Now specify the handler and role:
  • Handler: com.amazonaws.services.lambda.CreateAthenaPartitionsBasedOnS3Event::handleRequest
  • Role: Select ‘Choose an existing role’
  • Existing role: Select ‘lambda_athena_exec_role’


  11. Finally, set the timeout to one minute.

Configure S3 to send new object notifications to your Lambda function

On the Properties page for the bucket containing your VPC flow log data, expand the Events pane and create a new notification:

  • Name: FlowLogDataReceived
  • Events: ObjectCreated(All)
  • Send To: Lambda function
  • Select the ‘CreateAthenaPartitions’ Lambda function from the dropdown.


Now, whenever new files are delivered to your S3 bucket by Firehose, your ‘CreateAthenaPartitions’ Lambda function will be triggered. The function parses the newly received object’s key. Based upon the year/month/day/hour portion of the key, together with the PARTITION_TYPE you specified when creating the function (Month, Day, or Hour), the function determines which partition the file belongs in. It will then query Athena to determine whether this partition already exists. If the partition doesn’t exist, the function will create the partition, mapping it to the relevant portion of the S3 keyspace.

Let’s examine this logic in a bit more detail. Assume you’ve configured your ‘CreateAthenaPartitions’ Lambda function to create hourly partitions, and that Firehose has just delivered a file containing flow log data to s3://my-vpc-flow-logs/2017/01/14/07/xxxx.gz.

Looking at the S3 key for this new file, the Lambda function will infer that it belongs in an hourly partition whose spec is ‘2017-01-14-07’. On checking Athena, the function discovers that this partition does not exist, so it executes the following DDL statement.

ALTER TABLE default.vpc_flow_logs ADD PARTITION (IngestDateTime='2017-01-14-07') LOCATION 's3://my-vpc-flow-logs/2017/01/14/07/';

If the Lambda function had been configured to create daily partitions, the new partition would be mapped to ‘s3://my-vpc-flow-logs/2017/01/14/’; if monthly, the LOCATION would be ‘s3://my-vpc-flow-logs/2017/01/’.

Note that the partitions represent the date and time at which the logs were ingested into S3, which will be some time after the StartTime and EndTime values for the individual records in each partition.
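The function itself is written in Java (see the GitHub project above), but the key-parsing logic it applies can be sketched in a few lines of Python. This is an illustration of the mapping described above, not the repository code:

def partition_spec(key, partition_type='Hour'):
    # Firehose keys end in year/month/day/hour/object, for example
    # '2017/01/14/07/xxxx.gz', possibly behind an arbitrary prefix.
    parts = key.split('/')
    year, month, day, hour = parts[-5:-1]
    if partition_type == 'Month':
        return '%s-%s' % (year, month), '/'.join(parts[:-3]) + '/'
    if partition_type == 'Day':
        return '%s-%s-%s' % (year, month, day), '/'.join(parts[:-2]) + '/'
    # Default: hourly partitions.
    return ('%s-%s-%s-%s' % (year, month, day, hour),
            '/'.join(parts[:-1]) + '/')

# partition_spec('2017/01/14/07/xxxx.gz') returns
# ('2017-01-14-07', '2017/01/14/07/'), which plugs into
# ALTER TABLE ... ADD PARTITION (IngestDateTime='2017-01-14-07')
# LOCATION 's3://<bucket>/2017/01/14/07/';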

Query the partitioned data using Athena

Your queries can now take advantage of the partitions.


SELECT sourceaddress, count(*) cnt
FROM vpc_flow_logs
WHERE ingestdatetime > '2017-01-15-00'
AND action = 'REJECT'
GROUP BY sourceaddress
ORDER BY cnt desc
LIMIT 25;

To query the data ingested over the course of the last three hours, run the following query (assuming you’re using an hourly partitioning scheme).


SELECT sourceaddress, count(*) cnt
FROM vpc_flow_logs
WHERE date_parse(ingestdatetime, '%Y-%m-%d-%H') >=
      date_trunc('hour', current_timestamp - interval '2' hour)
AND action = 'REJECT'
GROUP BY sourceaddress
ORDER BY cnt desc
LIMIT 25;

As the following screenshots show, by using partitions you can reduce the amount of data scanned per query. In so doing, you can reduce query costs and latencies. The first screenshot shows a query that ignores partitions.

Screenshot of the query that ignores partitions, showing its run time and the amount of data scanned

This second screenshot shows the use of partitions in the WHERE clause.

Screenshot of the query that uses partitions in the WHERE clause, showing a shorter run time and less data scanned

As you can see, by using partitions this query runs in half the time and scans less than a tenth of the data scanned by the first query.

Conclusion

In the past, to analyze logs you had to extensively prepare data for specific query use cases or provision and operate storage and compute resources. With Amazon Athena and Amazon QuickSight, you can now publish, store, analyze, and visualize log data more flexibly. Instead of focusing on the underlying infrastructure needed to perform the queries and visualize the data, you can focus on investigating the logs.


About the Authors


Ben Snively is a Public Sector Specialist Solutions Architect. He works with government, non-profit and education customers on big data and analytical projects, helping them build solutions using AWS. In his spare time he adds IoT sensors throughout his house and runs analytics on it.

Ian Robinson is a Specialist Solutions Architect for Data and Analytics. He works with customers throughout EMEA, helping them to use AWS to create value from the connections in their data. In his spare time he’s currently restoring a reproduction 1960s Dalek.

Related

Analyze Security, Compliance, and Operational Activity Using AWS CloudTrail and Amazon Athena

o_athena-cloudtrail_1

Creating a Simple “Fetch & Run” AWS Batch Job


Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/creating-a-simple-fetch-and-run-aws-batch-job/

Dougal Ballantyne
Dougal Ballantyne, Principal Product Manager – AWS Batch

Docker enables you to create highly customized images that are used to execute your jobs. These images allow you to easily share complex applications between teams and even organizations. However, sometimes you might just need to run a script!

This post details the steps to create and run a simple “fetch & run” job in AWS Batch. AWS Batch executes jobs as Docker containers using Amazon ECS. You build a simple Docker image containing a helper application that can download your script or even a zip file from Amazon S3. AWS Batch then launches an instance of your container image to retrieve your script and run your job.

AWS Batch overview

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.

With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 Spot Instances.

“Fetch & run” walkthrough

The following steps get everything working:

  • Build a Docker image with the fetch & run script
  • Create an Amazon ECR repository for the image
  • Push the built image to ECR
  • Create a simple job script and upload it to S3
  • Create an IAM role to be used by jobs to access S3
  • Create a job definition that uses the built image
  • Submit and run a job that executes the job script from S3

Prerequisites

Before you get started, there are a few things to prepare. If this is the first time you have used AWS Batch, you should follow the Getting Started Guide and ensure that you have a valid job queue and compute environment.

After you are up and running with AWS Batch, the next step is to set up an environment in which to build and register the Docker image. For this post, register the image in an ECR repository. This is a private repository by default, and it can easily be used by AWS Batch jobs.

You also need a working Docker environment to complete the walkthrough. For the examples, I used Docker for Mac. Alternatively, you could easily launch an EC2 instance running Amazon Linux and install Docker.

You need the AWS CLI installed. For more information, see Installing the AWS Command Line Interface.

Building the fetch & run Docker image

The fetch & run Docker image is based on Amazon Linux. It includes a simple script that reads some environment variables and then uses the AWS CLI to download the job script (or zip file) to be executed.

To get started, download the source code from the aws-batch-helpers GitHub repository. The following link pulls the latest version: https://github.com/awslabs/aws-batch-helpers/archive/master.zip. Unzip the downloaded file and navigate to the “fetch-and-run” folder. Inside this folder are two files:

  • Dockerfile
  • fetch_and_run.sh

Dockerfile is used by Docker to build an image. Look at the contents; you should see something like the following:

FROM amazonlinux:latest

RUN yum -y install unzip aws-cli
ADD fetch_and_run.sh /usr/local/bin/fetch_and_run.sh
WORKDIR /tmp
USER nobody

ENTRYPOINT ["/usr/local/bin/fetch_and_run.sh"]
  • The FROM line instructs Docker to pull the base image from the amazonlinux repository, using the latest tag.
  • The RUN line executes a shell command as part of the image build process.
  • The ADD line copies the fetch_and_run.sh script into the /usr/local/bin directory inside the image.
  • The WORKDIR line sets the default directory to /tmp when the image is used to start a container.
  • The USER line sets the default user that the container executes as.
  • Finally, the ENTRYPOINT line instructs Docker to call the /usr/local/bin/fetch_and_run.sh script when it starts the container. When the image runs as an AWS Batch job, the script is passed the contents of the command parameter.

Now, build the Docker image! Assuming that the docker command is in your PATH and you don’t need sudo to access it, you can build the image with the following command (note the dot at the end of the command):

docker build -t awsbatch/fetch_and_run .   

This command should produce an output similar to the following:

Sending build context to Docker daemon 373.8 kB

Step 1/6 : FROM amazonlinux:latest
latest: Pulling from library/amazonlinux
c9141092a50d: Pull complete
Digest: sha256:2010c88ac1e7c118d61793eec71dcfe0e276d72b38dd86bd3e49da1f8c48bf54
Status: Downloaded newer image for amazonlinux:latest
 ---> 8ae6f52035b5
Step 2/6 : RUN yum -y install unzip aws-cli
 ---> Running in e49cba995ea6
Loaded plugins: ovl, priorities
Resolving Dependencies
--> Running transaction check
---> Package aws-cli.noarch 0:1.11.29-1.45.amzn1 will be installed

  << removed for brevity >>

Complete!
 ---> b30dfc9b1b0e
Removing intermediate container e49cba995ea6
Step 3/6 : ADD fetch_and_run.sh /usr/local/bin/fetch_and_run.sh
 ---> 256343139922
Removing intermediate container 326092094ede
Step 4/6 : WORKDIR /tmp
 ---> 5a8660e40d85
Removing intermediate container b48a7b9c7b74
Step 5/6 : USER nobody
 ---> Running in 72c2be3af547
 ---> fb17633a64fe
Removing intermediate container 72c2be3af547
Step 6/6 : ENTRYPOINT /usr/local/bin/fetch_and_run.sh
 ---> Running in aa454b301d37
 ---> fe753d94c372

Removing intermediate container aa454b301d37
Successfully built 9aa226c28efc

In addition, you should see a new local repository called awsbatch/fetch_and_run when you run the following command:

docker images
REPOSITORY               TAG              IMAGE ID            CREATED             SIZE
awsbatch/fetch_and_run   latest           9aa226c28efc        19 seconds ago      374 MB
amazonlinux              latest           8ae6f52035b5        5 weeks ago         292 MB

To add more packages to the image, you could update the RUN line or add a second one, right after it.

Creating an ECR repository

The next step is to create an ECR repository to store the Docker image, so that it can be retrieved by AWS Batch when running jobs.

  1. In the ECR console, choose Get Started or Create repository.
  2. Enter a name for the repository, for example: awsbatch/fetch_and_run.
  3. Choose Next step and follow the instructions.

    fetchAndRunBatch_1.png

You can keep the console open, as the tips can be helpful.

Push the built image to ECR

Now that you have a Docker image and an ECR repository, it is time to push the image to the repository. Use the following AWS CLI commands, if you have used the previous example names. Replace the example AWS account number (012345678901) with your own.

aws ecr get-login --region us-east-1

docker tag awsbatch/fetch_and_run:latest 012345678901.dkr.ecr.us-east-1.amazonaws.com/awsbatch/fetch_and_run:latest

docker push 012345678901.dkr.ecr.us-east-1.amazonaws.com/awsbatch/fetch_and_run:latest
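Note that aws ecr get-login does not log you in by itself; it prints a docker login command that you then need to run. A common shortcut (a sketch of the usual pattern; depending on your CLI and Docker versions you may need to tweak the printed command) is to evaluate its output directly:

# Run the docker login command that get-login prints:
$(aws ecr get-login --region us-east-1)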

Create a simple job script and upload to S3

Next, create and upload a simple job script that is executed using the fetch_and_run image that you just built and registered in ECR. Start by creating a file called myjob.sh with the example content below:

#!/bin/bash

date
echo "Args: $@"
env
echo "This is my simple test job!."
echo "jobId: $AWS_BATCH_JOB_ID"
echo "jobQueue: $AWS_BATCH_JQ_NAME"
echo "computeEnvironment: $AWS_BATCH_CE_NAME"
sleep $1
date
echo "bye bye!!"

Upload the script to an S3 bucket.

aws s3 cp myjob.sh s3://<bucket>/myjob.sh

Create an IAM role

When the fetch_and_run image runs as an AWS Batch job, it fetches the job script from Amazon S3. You need an IAM role that the AWS Batch job can use to access S3.

  1. In the IAM console, choose Roles, Create New Role.
  2. Enter a name for your new role, for example: batchJobRole, and choose Next Step.
  3. For Role Type, under AWS Service Roles, choose Select next to “Amazon EC2 Container Service Task Role” and then choose Next Step.

    fetchAndRunBatch_2.png

  4. On the Attach Policy page, type “AmazonS3ReadOnlyAccess” into the Filter field and then select the check box for that policy.

    fetchAndRunBatch_3.png

  5. Choose Next Step, Create Role. You see the details of the new role.

    fetchAndRunBatch_4.png
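If you prefer to script this step, the same role can be created with the AWS CLI. The following is a hedged sketch using the names above; the trust policy allows ECS tasks, which is how AWS Batch runs containers, to assume the role:

# Trust policy letting ECS tasks assume the role (sketch).
cat > ecs-tasks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name batchJobRole \
  --assume-role-policy-document file://ecs-tasks-trust.json

aws iam attach-role-policy --role-name batchJobRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess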

Create a job definition

Now that you’ve created all of the resources that you need, pull everything together and build a job definition that you can use to run one or many AWS Batch jobs.

  1. In the AWS Batch console, choose Job Definitions, Create.
  2. For the Job Definition, enter a name, for example, fetch_and_run.
  3. For IAM Role, choose the role that you created earlier, batchJobRole.
  4. For ECR Repository URI, enter the URI where the fetch_and_run image was pushed, for example: 012345678901.dkr.ecr.us-east-1.amazonaws.com/awsbatch/fetch_and_run.
  5. Leave the Command field blank.
  6. For vCPUs, enter 1. For Memory, enter 500.

    fetchAndRunBatch_5.png

  7. For User, enter “nobody”.

  8. Choose Create job definition.
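The same job definition can also be registered from the AWS CLI. A sketch using this walkthrough’s example names; replace the image URI and role ARN with your own:

# Register a job definition equivalent to the console steps above (sketch).
aws batch register-job-definition \
  --job-definition-name fetch_and_run \
  --type container \
  --container-properties '{
    "image": "012345678901.dkr.ecr.us-east-1.amazonaws.com/awsbatch/fetch_and_run",
    "vcpus": 1,
    "memory": 500,
    "user": "nobody",
    "jobRoleArn": "arn:aws:iam::012345678901:role/batchJobRole"
  }'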

Submit and run a job

Now, submit and run a job that uses the fetch_and_run image to download the job script and execute it.

  1. In the AWS Batch console, choose Jobs, Submit Job.
  2. Enter a name for the job, for example: script_test.
  3. Choose the latest fetch_and_run job definition.
  4. For Job Queue, choose a queue, for example: first-run-job-queue.
  5. For Command, enter myjob.sh,60.
  6. Choose Validate Command.

    fetchAndRunBatch_6.png

  7. Enter the following environment variables and then choose Submit job. A CLI equivalent of this submission appears after this list.

    • Key=BATCH_FILE_TYPE, Value=script
    • Key=BATCH_FILE_S3_URL, Value=s3://<bucket>/myjob.sh. Don’t forget to use the correct URL for your file.

    fetchAndRunBatch_7.png

  8. After the job is completed, check the final status in the console.

    fetchAndRunBatch_8.png

  9. In the job details page, you can also choose View logs for this job in CloudWatch console to see your job log.

    fetchAndRunBatch_9.png
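As mentioned in step 7, the same submission can be made from the AWS CLI, which is handy once you move beyond one-off tests. A sketch using this walkthrough’s example names; as before, <bucket> is a placeholder for your own bucket:

# Submit the job, passing the script location and type as overrides (sketch).
aws batch submit-job \
  --job-name script_test \
  --job-queue first-run-job-queue \
  --job-definition fetch_and_run \
  --container-overrides '{
    "command": ["myjob.sh", "60"],
    "environment": [
      {"name": "BATCH_FILE_TYPE", "value": "script"},
      {"name": "BATCH_FILE_S3_URL", "value": "s3://<bucket>/myjob.sh"}
    ]
  }'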

How the fetch and run image works

The fetch_and_run image works as a combination of the Docker ENTRYPOINT and COMMAND features, and a shell script that reads environment variables set as part of the AWS Batch job. When building the Docker image, it starts with a base image from Amazon Linux and installs a few packages from the yum repository. This becomes the execution environment for the job.

If the script you plan to run needs more packages, you can add them using the RUN parameter in the Dockerfile. You could even switch to a different base image, such as Ubuntu, by updating the FROM parameter.

Next, the fetch_and_run.sh script is added to the image and set as the container ENTRYPOINT. The script simply reads some environment variables and then downloads and runs the script or zip file from S3. It looks for two environment variables: BATCH_FILE_TYPE and BATCH_FILE_S3_URL. If you run fetch_and_run.sh with no environment variables, you get the following usage message:

BATCH_FILE_TYPE not set, unable to determine type (zip/script) of

Usage:
export BATCH_FILE_TYPE="script"
export BATCH_FILE_S3_URL="s3://my-bucket/my-script"
fetch_and_run.sh script-from-s3 [ <script arguments> ]

  – or –

export BATCH_FILE_TYPE="zip"
export BATCH_FILE_S3_URL="s3://my-bucket/my-zip"
fetch_and_run.sh script-from-zip [ <script arguments> ]

This shows that it supports two values for BATCH_FILE_TYPE: either “script” or “zip”. Setting “script” causes fetch_and_run.sh to download a single file and then execute it, passing any further arguments to the script. Setting “zip” causes fetch_and_run.sh to download a zip file, unpack it, and execute the named script with any further arguments. You can use the “zip” option to ship more complex jobs, with all of the application’s dependencies in one file.
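To make that flow concrete, here is a minimal sketch of the helper’s “script” mode. This is not the shipped fetch_and_run.sh (the real script in the aws-batch-helpers repository also implements the “zip” mode and does more error checking), but it shows the essential download-then-exec pattern:

#!/bin/bash
# Sketch of the "script" mode only; see the aws-batch-helpers repository
# for the full fetch_and_run.sh.

error_exit () {
  echo "${1}" >&2
  exit 1
}

[ -n "${BATCH_FILE_TYPE}" ]   || error_exit "BATCH_FILE_TYPE not set"
[ -n "${BATCH_FILE_S3_URL}" ] || error_exit "BATCH_FILE_S3_URL not set"

case "${BATCH_FILE_TYPE}" in
  script)
    TMPFILE=$(mktemp) || error_exit "Failed to create temporary file"
    # Fetch the job script from S3, make it executable, and run it,
    # passing along every argument after the script name.
    aws s3 cp "${BATCH_FILE_S3_URL}" "${TMPFILE}" \
      || error_exit "Failed to download ${BATCH_FILE_S3_URL}"
    chmod u+x "${TMPFILE}"
    shift   # drop the script-from-s3 name passed in the Command field
    exec "${TMPFILE}" "$@"
    ;;
  *)
    error_exit "Unsupported BATCH_FILE_TYPE: ${BATCH_FILE_TYPE}"
    ;;
esac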

Finally, the ENTRYPOINT parameter tells Docker to execute the /usr/local/bin/fetch_and_run.sh script when creating a container. In addition, it passes the contents of the COMMAND parameter as arguments to the script. This is what enables you to pass the script and arguments to be executed by the fetch_and_run image with the Command field in the SubmitJob API action call.

Summary

In this post, I detailed the steps to create and run a simple “fetch & run” job in AWS Batch. You can now easily use the same job definition to run as many jobs as you need by uploading a job script to Amazon S3 and calling SubmitJob with the appropriate environment variables.

If you have questions or suggestions, please comment below.

Make a PIR speaker system


Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/make-a-pir-speaker-system/

I enjoy projects that can be made using items from around the home. Add a Raspberry Pi and a few lines of code, and great joy can be had from producing something smart, connected and/or just plain silly.

The concept of the IoT Smart Lobby Welcoming Music System fits into this category. Take a speaker, add a Raspberry Pi and a PIR sensor (both staples of any maker household, and worthwhile investments for the budding builder), and you can create a motion-sensor welcome system for your home or office.

[DIY] Make a smart lobby music system for your office or home

With this project, you can automate welcoming music for your smart home or office. As long as someone is around, the music keeps playing: your favourite playlist at home, or welcome music to greet customers and business partners while they wait in your office lobby.

The Naran Build

IoT makers Naran have published their Smart Lobby build on Instructables, where you’ll find all the code and information you need to get making. You’ll also find their original walkthrough of how to use their free Prota OS for Raspberry Pi, which allows you to turn your Pi into a Smart Home hub.

Naran Prota IoT Sensor Speaker System

Their build allows you to use Telegram Bot to control the music played through their speaker. The music begins when movement is sensed, and you can control what happens next.

Telegram Bot for a Sensor Speaker System

It’s a great build for playing information for visitors or alerting you to an intrusion.

Tim Peake Welcoming Committee

A few months back, I made something similar in the lobby at Pi Towers:  I hid a sensor under our cardboard cutout of ESA astronaut Tim Peake. Visitors walking into the lobby triggered the sensor, and were treated to the opening music from 2001: A Space Odyssey.

Sadly, with the meeting rooms across the lobby in constant use, the prank didn’t last long.

Alex J’rassic on Twitter

In honour of the #Principia anniversary, I pimped out cardboard @astro_timpeake at @Raspberry_Pi Towers. Listen. https://t.co/MBUOjrARtI

If you’re curious, the Christmas tree should be a clue as to why Tim is dressed like a nativity angel.

The Homebrew Edition

If you’re like me, you learn best by doing. Our free resources allow you to develop new skills as you build. You can then blend the skills you have learned to create your own interesting projects. I was very new to digital making when I put together the music sensor in the lobby. The skills I had developed by following step-by-step project tutorials provided the foundations for something new and original.

Why not make your own welcoming system? The process could teach you new skills, and develop your understanding of the Raspberry Pi. If you’d like to have a go, I’d suggest trying out the Parent Detector. This will show you how to use a PIR sensor with your Raspberry Pi. Once you understand that process, try the Burping Jelly Baby project. This will teach you how to tell your Raspberry Pi when to play an MP3 based on a trigger, such as the poke of a finger or the detection of movement.
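To give a flavour of where those two tutorials can take you, here is a minimal sketch that polls a PIR sensor through the Linux sysfs GPIO interface and plays an MP3 when motion is detected. The GPIO number, file path, and choice of omxplayer are all assumptions; adjust them to match your wiring and setup:

#!/bin/bash
# Hypothetical wiring: PIR data pin on GPIO 4. Run with permissions that
# allow access to /sys/class/gpio (for example, as root or the gpio group).
GPIO=4
echo "${GPIO}" > /sys/class/gpio/export 2>/dev/null  # ignore "already exported"
echo "in" > /sys/class/gpio/gpio${GPIO}/direction

while true; do
  if [ "$(cat /sys/class/gpio/gpio${GPIO}/value)" = "1" ]; then
    # Motion detected: play the welcome track and wait for it to finish.
    omxplayer /home/pi/welcome.mp3
  fi
  sleep 0.5
done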

From there, you should have all the tools you need to make a speaker system that plays an MP3 when someone or something approaches. Why not have a go this weekend? If you do, tell us about your final build in the comments below.

The post Make a PIR speaker system appeared first on Raspberry Pi.

New Automated DMCA Notices Hit Movie Pirates With $300 Fines


Post Syndicated from Ernesto original https://torrentfreak.com/new-automated-dmca-notices-hit-movie-pirates-300-fines-170311/

Many Hollywood insiders see online piracy as a major threat, but very few are willing to target file-sharers with lawsuits or settlement demands.

Voltage Pictures was one of the pioneers on this front, at least in the US. Together with their legal team and BitTorrent tracking partner, the filmmakers have sued tens of thousands of people since 2010.

Initially, this was a very lucrative practice, as rightsholders were able to join many defendants in a single lawsuit. However, nowadays courts are more reserved, which is one of the reasons they started to look for alternatives.

One interesting development on this front is the company “Rights Enforcement.” This outfit tracks down BitTorrent pirates, but instead of taking them to court, it sends automated ‘fines’ via DMCA notices, asking for a $300 settlement.

By using the DMCA notice process, the rightsholders avoid expensive lawsuits. It also makes the settlement process easier to scale, since they can send out tens of thousands of ‘fines’ at once with limited resources.

While these schemes are not new, Rightscorp and CEG TEK have done the same, Rights Enforcement has a nasty sting in store for accused pirates.

The company is operated by lawyer Carl Crowell, who is best known for his work with various notorious copyright trolls. This includes the aforementioned Voltage Pictures, which filed lawsuits for several movies such as Dallas Buyers Club and The Hurt Locker.

These ties appear to be still intact, as the Rights Enforcement company lists several movies on its client list, many of which are linked to Voltage Pictures. Dallas Buyers Club is on there for example, as well as I.T., Mr. Church, Fathers & Daughters, Pay the Ghost, The Cobbler, and Good Kill.

Rights Enforcement Website

The client list suggests that the makers of these movies are now trying to extract settlement money from alleged file-sharers through automated settlements, because this is cheaper and possibly more profitable.

This is also what Rights Enforcement suggests on its website:

“Online infringement, including ‘peer-to-peer’ copying of material across the Internet is pervasive. Too often parties and rights holders are forced into the expensive forum of the courts,” the outfit writes.

“With filing fees of $400 and copyright damages in some jurisdictions reaching $150,000 for a single act, we work to permit rights holders to notify and address infringers and resolve their claims in an efficient and cost effective manner.”

The ‘sting’ with Rights Enforcement is that they have a team of known ‘troll’ lawyers lined up to wave the legal stick. In other words, if targeted subscribers are unwilling to pay but mistakenly identify themselves, they can still be taken to court.

They are not shy about using this threat either. In their automated DMCA settlement notices, Rights Enforcement warns that a failure to cooperate can lead to legal action.

“You may consider this a notice of potential lawsuit, a demand for the infringing activity to terminate, and a demand for damages from the actual infringer,” the automated email reads.

“We invite your voluntary cooperation in assisting us with this matter, identifying the infringer, and ensuring that this activity stops. Should the infringing activity continue we may file a civil lawsuit seeking judicial relief.”

It’s currently unknown who does the BitTorrent tracking, but according to defense lawyer Robert Cashman, it’s likely that the German outfit Guardaley is involved.

TorrentFreak spoke with Cashman, who has represented several accused pirates in the past. He is warning people against Rights Enforcement, describing it as a “monster” and the “evil twin” of settlement outfit CEG TEK.

The lawyer believes that the evidence used by Rights Enforcement might lead to inaccurate accusations, which Rights Enforcement will pursue in an aggressive fashion.

“So in essence, Right Enforcement will be a monster. It’ll be an evil version of what CEG-TEK strove to become,” Cashman says.

The link with CEG TEK comes up because it stopped sending out settlement requests recently. The company, which represented a current Rights Enforcement client in the recent past, now states on its website that it’s no longer offering settlement services. That said, we haven’t been able to find a direct link between the two outfits.

On a similar note, Rights Enforcement “boss” Carl Crowell was previously hired by another settlement firm, Rightscorp. While this may have served as inspiration, we haven’t seen any direct ties.

One thing’s for sure, though. Given the outspokenness of Crowell and the aggressive tactics he and other partners have employed in the past, this is certainly not the last time we’ll hear of Rights Enforcement.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.
