Planet Friends of Vienna.rb

Updated Friday, 19 January 2018 07:00
{}
Vienna.js - JavaScript Meetup ( Feed )
Monday, 28 August 2017
viennajs.org monthly meetup

vienna.js Vienna JavaScript User Group


Everybody is welcome & feel free to share this invitation!


Orga: Franz Enzenhofer

Talks:

YOUR TALK HERE
You want to practice speaking in front of others? You want to tell everyone about some cool JavaScript topic? This is your chance, we are still looking for talks! Write on the meetup wall! 


Sponsors:

We are currently looking for sponsors to cover free beer and maybe free pizza. 
Contact us at fp@codeq.at

Waltzing - gamify everything

We create playful solutions for education in schools, universities and companies. Our users learn faster, teach better and obtain more accurate test results through play.

At Waltzing, we create custom education experiences where playful learning reaches the next level. By changing the perspective, you learn without studying, remember without memorizing, and understand without hitting the books.

Ludwig Ruderstallerl.

cwd.at GmbH



Angular-Wien-Workshop

Angular Senkrechtstart: Workshop in Vienna

Thu, Oct 5 and Fri, Oct 6, 2017

In this workshop, Austrian O'Reilly author Manfred Steyer and Angular Vienna organizer Michael Hladky use a single continuous example to show you which concepts underpin the popular JavaScript framework Angular and how you can use it in your projects. During the exercises you will also get the chance to write a first Angular application of your own. At the end you will have a complete application that you can use as a template for your own projects.
The workshop is held in cooperation with the Angular Austria association.

Early-bird price until August 22, 2017

Details and registration:

https://www.eventbrit...

Angular Deep Dive: Workshop in Vienna

Mon, Nov 20 and Tue, Nov 21, 2017

In this follow-up workshop, Austrian O'Reilly author Manfred Steyer and Angular Vienna organizer Michael Hladky use a single continuous example to show you how to use advanced Angular concepts in your projects. During the exercises you will also get the opportunity to deepen your understanding of Angular by building an application of your own. At the end you will have a complete application that you can use as a template for your own projects.

The workshop is held in cooperation with the Angular Austria association.

Early-bird price until October 6, 2017

Details and registration:

https://www.eventbrit...

Vienna - Austria

Wednesday, September 27 at 7:00 PM

6

https://www.meetup.com/viennajs/events/236300243/

{}
Internet of Things (IoT) Vienna Meetup ( Feed )
Saturday, 26 August 2017
Topic Team Security: Security by Design

IoT Austria - Local Group Vienna

Details, venue, and registration at https://www.eventbrit...

Vienna - Austria

Tuesday, September 5 at 6:00 PM

4

https://www.meetup.com/IoT-Vienna/events/242855639/

{}
Vienna.js - JavaScript Meetup ( Feed )
Friday, 25 August 2017
ViennaJS August Meetup

vienna.js Vienna JavaScript User Group


Bring your Minitalks! Everybody is welcome & feel free to share this invitation!

Talks:

We are looking for some great talks!
If you want to speak about a cool JavaScript topic, write on the meetup wall or email mail@rolandschuetz.at.

Javascript Enterprise Architecture
Florian Bauer
A hands-on approach to shifting an organization to a modern JavaScript architecture, including a deep dive into project management, technical decisions, and architectural challenges, explained through code, components, and task-runner configurations.

Zipping files fun
Clemens Helm
This follow-up talk from Clemens will cover more about zipping files in the browser and Node.js.

Sponsors:

We are currently looking for sponsors to cover free beer and maybe free pizza. 
Contact us at fp@codeq.at

The most innovative and easy-to-use technology that enables truly personal customer interactions.

Do you want to make a real difference?
...working with exciting technology?
...in a supportive environment?
Then take on responsibility in one of our teams!

Check out our open positions here!


KIVU is looking for an ambitious frontend developer who will join a diverse team of developers, working on the graphical user interface to a novel data science platform and graph database. Working together with other teams and customers you are an avid listener and capable of translating geek speak to non-technical people. You will closely work together with a team of data scientists and a team of backend engineers who will provide the interfaces for the data. Interfacing with other teams and team members you will make use of UML diagrams and use Confluence for documentation. Development is managed and synchronized with help of the SCRUM process. 


Angular-Wien-Workshop


There are two parts, Senkrechtstart and Deep Dive.
Below is the information for both workshops.
Angular Senkrechtstart: Workshop in Vienna

Thu, Oct 5 and Fri, Oct 6, 2017

In this workshop, Austrian O'Reilly author Manfred Steyer and Angular Vienna organizer Michael Hladky use a single continuous example to show you which concepts underpin the popular JavaScript framework Angular and how you can use it in your projects. During the exercises you will also get the chance to write a first Angular application of your own. At the end you will have a complete application that you can use as a template for your own projects.
The workshop is held in cooperation with the Angular Austria association.
Early-bird price until August 22, 2017

Details and registration:

https://www.eventbrit...


Angular Deep Dive: Workshop in Vienna

Mon, Nov 20 and Tue, Nov 21, 2017

In this follow-up workshop, Austrian O'Reilly author Manfred Steyer and Angular Vienna organizer Michael Hladky use a single continuous example to show you how to use advanced Angular concepts in your projects. During the exercises you will also get the opportunity to deepen your understanding of Angular by building an application of your own. At the end you will have a complete application that you can use as a template for your own projects.

The workshop is held in cooperation with the Angular Austria association.

Early-bird price until October 6, 2017

Details and registration:

https://www.eventbrit...



We are a bunch of techy travel experts on a mission to enrich people's lives through touring. With over 30 different nationalities in our team and offices spread across Australia, Europe and North America, we work to deliver the best possible advice and tour booking experience to our customers.

CTO: http://www.tourradar....

Full Stack PHP Engineers: http://www.tourradar.....

DevOps: http://www.tourradar....

Product Managers: http://www.tourradar....

UX Designer: http://www.tourradar....


Vienna - Austria

Wednesday, August 30 at 7:00 PM

81

https://www.meetup.com/viennajs/events/236300242/

How to Implement a GraphQL API in Rails

Reading Time: 8 minutes

GraphQL came out of Facebook a number of years ago as a way to solve a few different issues that typical RESTful APIs are prone to. One of those was the issue of under- or over-fetching data.

Under-fetching is when the client has to make multiple roundtrips to the server just to satisfy the data needs they have. For example, the first request is to get a book, and a follow-up request is to get the reviews for that book. Two roundtrips is costly, especially when dealing with mobile devices on suspect networks.

Over-fetching is when you only need specific data, such as the name + email of a user, but since the API doesn’t know what you need, it sends you additional information which will be ignored, such as address fields, photo, etc.

With GraphQL, you describe to the server exactly what you are looking for, no more, no less. A typical request might look like this, asking for some information about rental properties along with the name of the owner:

query {
  rentals {
    id
    beds
    owner {
      name
    }
  }
}

The response from the server arrives as follows:

{
  "data": {
    "rentals": [
      {
        "id": "203",
        "beds": 2,
        "owner": {
          "name": "Berniece Anderson"
        }
      },
      {
        "id": "202",
        "beds": 1,
        "owner": {
          "name": "Zola Hilll"
        }
      }
    ]
  }
}

The request ends up looking remarkably similar to the response. We described the exact data we were looking for and that is how it arrived back to us. If you’re absolutely brand-new to GraphQL, I recommend the website https://www.howtographql.com, which provides great examples in a variety of frontend/backend technologies.

In this article, we will explore how to implement a GraphQL API in Rails, something that giants such as GitHub and Shopify are already using in production. The application we’ll be working with is available on GitHub.


Getting Started

We’ll start with a fresh Rails installation: rails new landbnb --database=postgresql --skip-test.

We’ll be working with three models for this app:

  • Rental: The house/apartment being rented
  • User: The User which owns the Rental (owner) or books a Rental (guest)
  • Booking: A User staying at a Rental for a specified period of time

Because the point of this article isn’t to cover DB migrations and model setup, please refer to the migrations and models provided in the GitHub repository. I have also created a seed file to provide us with some initial data to play around with. Simply run the command bundle exec rake db:seed.
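
For orientation, here is a minimal sketch of how the three models might be associated. The associations and the use of has_secure_password are assumptions based on how the models are used later in this article; the real migrations and models live in the GitHub repository.

# app/models/rental.rb (sketch)
class Rental < ApplicationRecord
  belongs_to :user               # the owner of the rental
  has_many :bookings
end

# app/models/user.rb (sketch)
class User < ApplicationRecord
  has_secure_password            # assumed, since SignInUser later calls user.authenticate
  has_many :rentals              # rentals this user owns
  has_many :bookings             # stays this user has booked as a guest
end

# app/models/booking.rb (sketch)
class Booking < ApplicationRecord
  belongs_to :rental
  belongs_to :user               # the guest
end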

Installing GraphQL

Now it’s time to actually get to the GraphQL part of our Rails app.

Add the graphql gem to your Gemfile and then run the command rails generate graphql:install. This will create a new app/graphql folder, which is where we’ll spend the majority of our time. It has also added a route for us along with a new controller. Unlike typical Rails apps, we’ll almost never be working inside of the controller or the routes files.
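
Concretely, those two steps look like this (the gem is left unpinned here; pin a version in your own Gemfile as needed):

# Gemfile
gem 'graphql'

# then, from the project root:
#   bundle install
#   rails generate graphql:install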

NOTE: In the app/graphql/landbnb_schema.rb file, comment out the mutation line until we have built mutations. It was giving me an error!

In GraphQL, there are three “root” types:

  • query: Fetching data… think of GET requests
  • mutation: Modifying data… think of POST or PUT requests
  • subscription: Real-time updates… think of ActionCable or websockets

Queries

We’ll begin by defining our first query to fetch all of the rentals:

# app/graphql/types/query_type.rb
Types::QueryType = GraphQL::ObjectType.define do
  name "Query"

  field :rentals, !types[Types::RentalType] do
    resolve -> (obj, args, ctx) {
      Rental.all
    }
  end
end

By defining a field called rentals, it has given us the ability to perform a query with the field rentals:

query {
  rentals {
    id
  }
}

If we explore the code a little more, the field is made up of two parts: a type and a resolver.

The type is the type of data this field will return. !types[Types::RentalType] means that it will return a non-null value, which is an array of something called a Types::RentalType. We’ll look at that in a second.

We’ve also passed a block that defines a resolver. A resolver is basically us telling our code how to fill out the data for the rentals field. In this case, we’ll just include a naive Rental.all command. Don’t worry about what obj, args, ctx are, as we’ll explore them more later.

Next, we need to define what the Types::RentalType is:

# app/graphql/types/rental_type.rb
Types::RentalType = GraphQL::ObjectType.define do
  name 'Rental'

  field :id, !types.ID
  field :rental_type, !types.String
  field :accommodates, !types.Int
  # ... other fields ...
  field :postal_code, types.String

  field :owner, Types::UserType do
    resolve -> (obj, args, ctx) { obj.user }
  end
  field :bookings, !types[Types::BookingType]
end

The above code reminds me a little of defining JSON serializers. The object we are serializing in this case is an instance of the Rental model, and we’re defining which fields are available to be queried (along with their types). The owner is slightly different because we don’t actually have an owner field on the model. By providing a resolver, we can resolve owner to the object’s user field.

You can run this query by visiting http://localhost:3000/graphiql in the browser and using the GraphiQL tool, which allows you to perform GraphQL queries and explore the API.

We’ll have to create types for the User and Booking as well, but they look very similar.
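
For completeness, a minimal sketch of what those two types might look like; the exact field lists are assumptions, pieced together from the fields queried elsewhere in this article:

# app/graphql/types/user_type.rb (sketch)
Types::UserType = GraphQL::ObjectType.define do
  name 'User'

  field :id, !types.ID
  field :name, !types.String
  field :email, !types.String
end

# app/graphql/types/booking_type.rb (sketch)
Types::BookingType = GraphQL::ObjectType.define do
  name 'Booking'

  field :id, !types.ID
  field :start_date, !types.String
  field :stop_date, !types.String
  field :guests, !types.Int
end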

Queries with arguments

What if we wanted to allow the user to provide additional data to the query they are making? For example, the ability to say how many rentals should be returned via a limit argument. This is done when defining the rentals field, which we’ll update to the code below:

# app/graphql/types/query_type.rb
Types::QueryType = GraphQL::ObjectType.define do
  name "Query"

  field :rentals, !types[Types::RentalType] do
    argument :limit, types.Int, default_value: 20, prepare: -> (limit) { [limit, 30].min }
    resolve -> (obj, args, ctx) {
      Rental.limit(args[:limit]).order(id: :desc)
    }
  end
end

What we have done is to state that the rentals field can contain a limit argument, which must be an integer. We also provided a default value and a “preparing” function to massage the argument a little bit before it is used.

You’ll notice that our resolve lambda now takes advantage of the args parameter to access the limit argument. We can now perform the query like this:

query {
  rentals(limit: 5) {
    id
  }
}

Mutations

So far, we have only queried the data, but it is time to modify it! We’ll do this by creating a mutation to allow the user to sign in. Our query will look like this:

mutation {
  signInUser(email: {email: "test6@email", password: "secret"}) {
    token
    user {
      id
      name
      email
    }
  }
}

Note that the “root” type is now mutation. We’ve provided some arguments to the signInUser field and have specified that in return we want the token (a JWT that we’ll generate) and a few fields from the user.

First, ensure that your mutation line in the app/graphql/landbnb_schema.rb file is uncommented if you had commented it out previously. Then we’ll add a signInUser field to our mutation file:

# app/graphql/types/mutation_type.rb
Types::MutationType = GraphQL::ObjectType.define do
  name "Mutation"

  field :signInUser, function: Mutations::SignInUser.new
end

And finally, we’ll write the code to handle resolving that field, which will live in its own file making it easier to test in isolation.

# app/graphql/mutations/sign_in_user.rb
class Mutations::SignInUser < GraphQL::Function
  # define the arguments this field will receive
  argument :email, !Types::AuthProviderEmailInput

  # define what this field will return
  type Types::AuthenticateType

  # resolve the field's response
  def call(obj, args, ctx)
    input = args[:email]
    return unless input

    user = User.find_by(email: input[:email])
    return unless user
    return unless user.authenticate(input[:password])

    OpenStruct.new({
      token: AuthToken.token(user),
      user: user
    })
  end
end
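
The function above references two types that are not shown in this article: Types::AuthProviderEmailInput and Types::AuthenticateType. Here is a minimal sketch of how they might be defined with the same define DSL; the field lists are assumptions derived from the signInUser query above:

# app/graphql/types/auth_provider_email_input.rb (sketch)
Types::AuthProviderEmailInput = GraphQL::InputObjectType.define do
  name 'AuthProviderEmailInput'

  argument :email, !types.String
  argument :password, !types.String
end

# app/graphql/types/authenticate_type.rb (sketch)
Types::AuthenticateType = GraphQL::ObjectType.define do
  name 'Authenticate'

  field :token, types.String
  field :user, Types::UserType
end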

The AuthToken class is a small PORO that I’ve put inside of the models folder. It uses the json_web_token gem.

# app/models/auth_token.rb
class AuthToken
  def self.key
    Rails.application.secrets.secret_key_base
  end

  def self.token(user)
    payload = {user_id: user.id}
    JsonWebToken.sign(payload, key: key)
  end

  def self.verify(token)
    result = JsonWebToken.verify(token, key: key)
    return nil if result[:error]
    User.find_by(id: result[:ok][:user_id])
  end
end

Authentication

Now that we’ve provided the token in response to the signInUser mutation, we’ll expect that the token is passed in a header for subsequent requests.

With GraphiQL, you can define headers sent automatically with each request in the config/initializers/graphiql.rb file (remember, this is for development only). I’ve used the dotenv gem to store the JWT_TOKEN during development.

if Rails.env.development?
  GraphiQL::Rails.config.headers['Authorization'] = -> (_ctx) {
    "bearer #{ENV['JWT_TOKEN']}"
  }
end

We’ll now need to modify the controller to correctly pass the current_user in as the context to our GraphQL code.

# app/controllers/graphql_controller.rb
def execute
  # ...
  context = {
    current_user: current_user
  }
  #...
end

private

def current_user
  return nil if request.headers['Authorization'].blank?
  token = request.headers['Authorization'].split(' ').last
  return nil if token.blank?
  AuthToken.verify(token)
end

If we look at our bookRental mutation, we can now grab the current user using this Authorization token. First, add the bookRental field to the mutations file: field :bookRental, function: Mutations::BookRental.new. We’ll now take a look at the actual mutation code:

# app/graphql/mutations/book_rental.rb
class Mutations::BookRental < GraphQL::Function
  # define the required input arguments for this mutation
  argument :rental_id, !types.Int
  argument :start_date, !types.String
  argument :stop_date, !types.String
  argument :guests, !types.Int

  # define what the return type will be
  type Types::BookingType

  # resolve the field, performing the mutation and returning its response
  def call(obj, args, ctx)
    # Raise an exception if no user is present
    if ctx[:current_user].blank?
      raise GraphQL::ExecutionError.new("Authentication required")
    end

    rental = Rental.find(args[:rental_id])

    booking = rental.bookings.create!(
      user: ctx[:current_user],
      start_date: args[:start_date],
      stop_date: args[:stop_date],
      guests: args[:guests]
    )

    booking
  rescue ActiveRecord::RecordNotFound => e
    GraphQL::ExecutionError.new("No Rental with ID #{args[:rental_id]} found.")
  rescue ActiveRecord::RecordInvalid => e
    GraphQL::ExecutionError.new("Invalid input: #{e.record.errors.full_messages.join(', ')}")
  end
end

Notice that we also handled errors for when the Rental ID was invalid or there were validation errors with the booking (for example, missing information or invalid booking dates).

Conclusion

What we’ve looked at is how to get up and running with GraphQL in Rails. We’ve defined queries, mutations, and a number of different types. We’ve also learned how to provide arguments to fields and how to authenticate a user using JSON Web Tokens.

In my next article, we’ll look at how to guard our application from a few potential performance threats.


The post How to Implement a GraphQL API in Rails appeared first on the Codeship blog, via @codeship.

{}
Internet of Things (IoT) Vienna Meetup ( Feed )
Wednesday, 23 August 2017
Topic Team Blockchain: August 2017 Meeting

IoT Austria - Local Group Vienna

Details, venue, and registration at https://www.eventbrit...

Vienna - Austria

Tuesday, August 29 at 6:00 PM

46

https://www.meetup.com/IoT-Vienna/events/242215913/

Tools and Practices for Documenting Microservices

Reading Time: 6 minutes

I will assume you are at least familiar with the concept of microservices — loosely coupled services that provide discrete solutions to business use cases that you can combine to solve current needs and demand. The architectural pattern has gained popularity over the past years, and although not everyone is completely sure what “doing it right” looks like, it’s a concept that suits modern needs and is here to stay for the foreseeable future.

I help organize the Write the Docs (a global community for those interested in technical documentation) group in Berlin. Over the past month, multiple people asked me about what tools and practices I recommend for documenting microservices and application architectures that use the pattern.

Some light Googling later, I found others asking the same question but no concrete recommendations, so I thought it was time to set some ideas down. I intend this post to set out the problem, pose some solutions, and provoke discussion for those in the field. These are merely my musings, but together we can determine what best practice might be and create ideas for actual tooling to help.


Defining the Problem

Each microservice is in essence a “typical” application. In many ways, you can follow standard best practices for documenting each of them (and if you need help with that, I recommend my ‘A Documentation Crash Course for Developers‘ post). In my opinion, the area where developers are stuck is visualizing and documenting how the microservices interact.

Topic-Based Documentation

Before we move into forward-thinking, I want to take you all back into a documentation practice that has existed for some time but has potential use here, at least conceptually. Topic-based documentation breaks documentation down into discrete concepts (topics) that you can then assemble to suit particular documentation use cases.

For example, a Getting Started guide for developers might combine installation, configuration, and running topics, whereas a Getting Started guide for users might combine configuration, running, and commands topics. As you can see, it combines discrete content items to suit different use cases. Sound familiar?

Note that the traditional tooling for topic-based documentation may not be entirely appropriate for this use case, as it’s often expensive, proprietary, and in itself, monolithic. However, we can certainly borrow elements of the idea and tooling.

Display All Endpoints

In this Stack Overflow post, the poster asks how to display all endpoints across all services: which services are public and active, and which endpoints within them are the same. Using the approach set out above, we could create a page that queries all of our services marked as active and public, along with all the endpoints within them that are the same. If you add or remove a service or endpoint, the page will update to reflect this.

Display Intersection of Endpoints

A more complex need might be showing how the services interact at an application level. A service calls another service using an endpoint, joined by a parameter. Or to put it another way, the user service queries the order service to find out the orders a user has made, using their user ID to query.

Endpoint Explanation

Great, but so far this approach is purely about demonstrating endpoint functionality. What about the conceptual explanation of how these fit together in a microservice-based application? Again, ideally these snippets of explanation should borrow from the architectural paradigm and the topic-based approach I mentioned, and be usable in different and varied contexts.

As with your code, you should consider breaking down these explanations into discrete and reusable components. For example, if a user arrives at your application to see an order status, this could involve several services: authentication, user records, order listings, and order status.

A user arriving to check account details could involve the authentication, user records, and an account service. Therefore, you should keep the conceptual explanation of each of these services separated, likely in the repository of the service. In fact, this is probably what you’re already doing.

I propose you add extra snippets of documentation in a “documentation assembly” service that contain details as to how each of the potential intersections work. For example, a file that describes how the user record service calls the order service, and another file that describes how the user record service calls the account service. In such a simple example as this one, including this explanation in the API documentation may be enough, but there may also be times where you need more.

Tooling for the Documentation Assembly Service

How you handle the assembly of the different sources of information is up to you. Much like in the coding world, the documentation world has a myriad of tooling available, and you decide what suits you best. To suit the microservice architecture, this assembly should be a service itself, and you should consider tooling that can happily run in containers, serverless instances, or similar. Fortunately, documentation generation and hosting is not generally a high-impact service, so is easier to maintain.

No current tooling will do everything for you, so I will present pieces of the puzzle that I feel you could adapt to work well and how they might help. I will also present a handful of alternatives for different markup languages, but will leave further investigation and research to you and the comments section; you can also get in touch with me.

As most markup languages and API specs are parsable formats, a competent programmer should also be able to roll their own custom solution if nothing I present helps.

It’s worth noting that there are some commercial services or CMS-like systems that could handle some of these processes for you, but I feel this goes against the microservice mentality.

Conversion

To enable the combination of documentation in different formats to ease management and rendering, you might need to convert to create a unified format.

  • Pandoc – One of my favorite tools. Converts between a wide variety of markup formats, but no API specification formats.
  • Swagger2Markup – Converts Swagger to AsciiDoc or Markdown.
  • API Spec Converter – Converts between Swagger (V1 and 2), Open API 3, API Blueprint, RAML, WADL, and others.
  • apib2swagger – Converts API Blueprint to Swagger.
  • swagger2blueprint – Converts Swagger to API Blueprint.
  • Apimatic Transformer (online) – Converts between a wide variety of specifications including Postman.
  • apiary2postman – Convert API Blueprint to Postman.
  • Blueman – Convert API Blueprint to Postman.
  • apib2json – Convert API Blueprint to JSON.

Transclusion

Transclusion is a term that I use to mean including the contents of one document in another. You might call it linking, inclusion, cross-referencing, or something else. But for our purposes, it will be how we include a variety of sources of information (API references and linking explanatory text) into a series of files for rendering. Many markup languages will do this for you by default, while others will need ‘encouragement’.

  • Markdown doesn’t include other files by default, but you have options with hercule, MultiMarkdown or, as part of your rendering pipeline, a static site generator like Jekyll.
  • Asciidoctor, a widely used toolchain for AsciiDoc, seamlessly handles including other sources.
  • reStructuredText can include external files by default.
  • If you want to enter the topic-based world, then DITA includes cross-referencing for code and text. DocBook has text objects and includes.

Rendering

Rendering your assembled files into HTML, PDF, ePub, or another format is a default behavior of every documentation markup language, so dig into the documentation of whichever format you choose to pick an option.

Create the service(s)

I can’t dictate what your documentation service(s) will need, but it should be possible to use containers to manage your dependencies and then a bunch of scripts to check out, assemble, render, and serve documentation. Extra points if you parameterize the service(s) to generate different documentation based on what you feed in, for example, toggles to include individual APIs or snippets based on need or use case.

Next Steps

Okay, I admit, I haven’t told you exactly what to do in this article. Rather, I presented a series of potential ideas and resources to spark discussion, and you’re possibly none the wiser than when you started reading.

However, what else could you throw into the mix? Testing would be an obvious start, and I suggest you read my earlier posts on testing aspects of documentation for more ideas. You could add other services to render documentation in different formats or ways, feed support systems or social media, or create an API for your API documentation. As any microservices fan knows, once you work through the complexities of smashing apart the monolith, the possibilities are endless.


The post Tools and Practices for Documenting Microservices appeared first on the Codeship blog, via @codeship.

The Basics of the Docker Run Command

Reading Time: 6 minutes

For many Docker enthusiasts, the docker run command is a familiar one. It’s often the first Docker command we learn. The docker run command is the command used to launch Docker containers. As such, it’s familiar to anyone starting or running Docker containers on a daily basis.

In this article, we will get back to the basics and explore a few simple docker run examples. During these examples, we will use the standard redis container image to show various ways to start a container instance.

While these examples may be basic, they are useful for anyone new to Docker.


Just Plain Ol’ Docker Run

The first example is the most basic. We’ll use the docker run command to start a single redis container.

$ docker run redis
1:C 16 Jul 08:19:15.330 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf

We can see that not only did we start the container, but we did so in “attached” mode. By default, if no parameters or flags are passed, Docker will start the container in “attached” mode. This means that the output from the running process is displayed on the terminal session.

It also means that the terminal session has been hijacked by the running container. If we were to press ctrl+c, for example, we would stop the redis service and, as such, stop the container.

If we leave the terminal session alone and open another terminal session, we can execute the docker ps command. With this, we can see the container in a running status.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
1b83ac544e95        redis               "docker-entrypoint..."   2 minutes ago       Up 2 minutes        6379/tcp            loving_bell

From the docker ps command above, we can see quite a bit about the running container, but one thing sticks out more than others. That is the name of the container: loving_bell.

Using the --name Parameter

By default, Docker will create a unique name for each container started. The names are generated from a list of descriptions (e.g., “boring” or “hungry”) and famous scientists or hackers (e.g., Wozniak, Ritchie). It is possible, however, to specify a name for our container. We can do so by simply using the --name parameter when executing docker run.

$ docker run --name redis redis
1:C 16 Jul 08:22:17.296 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf

In the above example, we used the --name parameter to start a redis container named redis. If we once again run the docker ps command, we can see that our container is running, this time with our specified name.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
67bbd0858ef5        redis               "docker-entrypoint..."   30 seconds ago      Up 27 seconds       6379/tcp            redis

Using --name to limit the number of containers running

The --name parameter is a useful option to know. Not only does naming a container make it easier to reference the container when executing Docker commands, but naming the container can also be used to control the number of containers that run on a single host.

To explain this in a bit more detail, let’s see what happens if we try to start another container named redis without stopping the previous redis container.

$ docker run --name redis redis
docker: Error response from daemon: Conflict. The container name "/redis" is already in use by container "67bbd0858ef5b1782875166b4c5e6c1589b28a99d130742a3e68f62b6926195f". You have to remove (or rename) that container to be able to reuse that name.

We can see one very important fact about running containers: With Docker, you are not allowed to run multiple containers with the same name. This is useful to know if you need to run multiple instances of a single container.

It is also useful to know this limitation if you wish to only run one instance of a specific container per host. A common use case for many users of Docker is to use the --name as a safety check against automated tools launching multiple Docker containers.

By specifying a name within the automated tool, you are essentially ensuring that automated tools can only start one instance of the specified container.

Using -d to Detach the Container

Another useful parameter to pass to docker run is the -d flag. This flag causes Docker to start the container in “detached” mode. A simple way to think of this is to think of -d as running the container in “the background,” just like any other Unix process.

Rather than hijacking the terminal and showing the application’s output, Docker will start the container in detached mode.

$ docker run -d redis
19267ab19aedb852c69e2bd6a776d9706c540259740aaf4878d0324f9e95af10
$ docker run -d redis
0f3cb6199d442822ecfc8ce6a946b72e07cf329b6516d4252b4e2720058c702b

The -d flag is useful when starting containers that you wish to run for long periods of time. Which, if you are using Docker to run services, is generally the case. In attached mode, a container is linked with the terminal session.

Using -d is a simple way to detach the container on start.

Using -p to Publish Container Ports

In the examples above, all of our redis containers have been inaccessible to anything outside of the internal Docker service. The reason for this is that we have not published any ports to connect to redis. To publish a port via docker run, we simply need to add the -p flag.

$ docker run -d -p 6379:6379 --name redis redis
2138279e7d29234defd2b9f212e65d47b9a0f3e422165b4e4025e466f25bbc2b

In the above example, we used the -p flag to publish port 6379 on the host to port 6379 within the container. This means anyone connecting to this host over port 6379 will be routed to the container via port 6379.

The syntax for this flag is host_ip:host_port:container_port, with the host IP being optional. If we wanted to see what ports were mapped on a previously running container, we can use the docker ps command.

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
2138279e7d29        redis               "docker-entrypoint..."   22 seconds ago      Up 21 seconds       0.0.0.0:6379->6379/tcp   redis

We can see that our host is listening across all interfaces (0.0.0.0) on port 6379, and that traffic is being redirected to port 6379 within the container.

Another useful tip regarding the -p flag is that you are able to specify it multiple times. This comes in handy if the container in question uses multiple ports.

Using -v to Mount Host Volumes

The last option we are going to explore is one that can be very important to anyone running containers that require persistent storage. This option is the -v flag.

The -v flag is used to define volume mounts. Volume mounts are a way to share storage volumes across multiple containers. In the example below, we are sharing the /tmp/data directory on the host as /data to the container.

$ docker run -d -p 6379:6379 --name redis -v /tmp/data:/data redis
23de16619b5983107c60dad00a0a312ee18e526f89b26a6863fef5cdc70c8426

The example above makes it so that anything written to /data within the container actually ends up in /tmp/data on the host. The same is true of anything written to /tmp/data on the host; that data will also be available within /data in the container.

We can see this if we look within the /tmp/data directory on the host.

$ ls /tmp/data/
dump.rdb

This option is important for anyone running a database or application that requires persistence within Docker.

It is important, because any data that is written within the container is removed as soon as the container itself is removed. Essentially, if we were to simply spin up a redis instance without using volume maps, we could populate data within that redis instance. However, as soon as the container that hosts that instance is removed, the data within that redis instance is also removed.

By using volume mounts, we can keep the data located on the host (as seen above), allowing any other redis container that uses that same volume mount to start where the previous container left off.

Performance implications of volume mounts

Another key aspect of volume mounts is that the write speed to a volume mount is far greater than the write speed within the container’s filesystem. The reason for this is that the default container filesystem uses features such as thin provisioning and copy-on-write. These features can introduce latency for write-heavy applications.

By using a host-based volume mount, the containerized application can achieve the same write speeds as the host itself.

Summary

In this article, we covered quite a few options for the docker run command. However, while these options are key, they only scratch the surface of the available options to docker run. Some of the options are complex enough to deserve an article unto themselves.


The post The Basics of the Docker Run Command appeared first on the Codeship blog, via @codeship.

Centralized Team Management For Everyone on Codeship

Reading Time: 2 minutes

TL;DR: We made changes to the Codeship Account Structure. Now, centralized management of teams, permissions and projects are available for all accounts. If you are a current Codeship user this does not have any impact on your subscription, plan and price, your invoices, or your applied discounts.

Organizations Accounts for Everyone on Codeship

We believe that CI and CD are for teams, and that centralized team and permission management is of the highest importance for that. After receiving a lot of feedback, we decided to sunset our concept of Personal Accounts on Aug 19th, 2017 and migrated all accounts to be “Organization” accounts with access to all subscription plans, including the free plan, as well as centralized team management.

What Changed?

  • From Aug 19th, 2017 onwards, all new accounts created at sign-up are Organization Accounts, which have access to centralized team management and all available plans, including the free plan.

  • If you already have an account created prior to Aug 19th, 2017, we migrated your Personal Account to an Organization Account, which gives you access to all plans and centralized team management for free.

As mentioned, none of these changes have any impact on your subscription, plan and price, your invoices, or your applied discounts.

We believe that with this change we can offer a better experience for teams of all sizes with our Codeship products and make it easy for you to manage your teams in one place. Please let me know in the comments if you have any questions or want to learn more.

The post Centralized Team Management For Everyone on Codeship appeared first on the Codeship blog, via @codeship.

What Is Google Functions and How to Use It with Codeship

Reading Time: 5 minutes

You may not have heard of Google Functions, since Google has been unusually quiet about its great new Google Cloud service. Even if you have heard about it, you may be a bit confused as to what exactly it is. It took me a bit of time to fully understand it, even though it’s fundamentally very simple.

Google Functions, like its cousin AWS Lambda, represents something of a shift in application development by truly leveraging the increasing power and scope of an integrated cloud services platform, such as Google Cloud.


Google Functions Is All About Events

Want to have something automated happen any time a file is changed or added? You could handle this inside your application, or you could use monitoring or cron jobs. Or you could use Google Functions!

Functions aims to make these single-task, event-based actions much easier to write and maintain by abstracting them out to their own service with easy configuration for triggers, ranging from direct integrations with events across the Google Cloud platform stack (making the combined Google services stack increasingly powerful in aggregate) down to simple webhook-based events.

Writing Functions

Functions are written as Node.js modules, making them super simple to write and contain for the single tasks they serve. Functions can interact with the Google Cloud library and the resources available and can access data from the triggering event.

For instance, here’s an example function from the Google Functions documentation written based on a change to Google Cloud Storage:

/**
 * Background Cloud Function to be triggered by Cloud Storage.
 *
 * @param {object} event The Cloud Functions event.
 * @param {function} callback The callback function.
 */
exports.helloGCS = function (event, callback) {
  const file = event.data;

  if (file.resourceState === 'not_exists') {
    console.log(`File ${file.name} deleted.`);
  } else if (file.metageneration === '1') {
    // metageneration attribute is updated on metadata changes.
    // on create value is 1
    console.log(`File ${file.name} uploaded.`);
  } else {
    console.log(`File ${file.name} metadata updated.`);
  }

  callback();
};

While I won’t get into what is specifically happening here, the main takeaway is that this is simple JavaScript. A complete function can run an important, automated task in just a few lines of code with no internal application dependencies.

For more specific recipes and explanation, Google Functions has surprisingly good documentation for being such a young service.

Deploying Google Functions With Codeship Pro

If you want to start testing out Google Functions, one important step is getting them out into the wild. You may want to combine this with some JavaScript linting or with your larger application or infrastructure deployments, as well.

Fortunately (as you might expect here on the Codeship blog), Codeship makes it easy to deploy Google Functions. We’re going to use Codeship Pro to show you how to do this.

Google’s deployment service

Since Codeship Pro uses Docker images you define to run all of your commands, one of the important parts of any Codeship Pro-based deployment is the Docker image that will be your deployment environment. In this case, that means a container running from an image built with the Google Cloud CLI and authentication setup defined.

Since we know that’s important, we’ve gone ahead and built an image and a starter repo that you can use to jumpstart any Google Cloud deployment.

The containers your commands run in are coordinated via your codeship-services.yml file, so you will need to define a Services file that includes at least one container for your application as well as the Google Cloud deployment service. Here’s an example:

app:
  build:
    image: your-name/your-image
    path: .
    dockerfile: Dockerfile
  encrypted_env_file: env.encrypted
  add_docker: true
  volumes:
    - ./deployment/:/deploy

deployment:
  image: codeship/google-cloud-deployment
  encrypted_env_file: google-auths.env
  add_docker: true
  volumes:
    - ./deployment/:/deploy

For a more complete breakdown on what’s happening, you can read through the codeship-services.yml file documentation. The important part is that the app service will run any code or tests we have, while the deployment service is pulling down the codeship/google-cloud-deployment image we maintain to build a container you can use in your CI/CD pipeline.

Authentication

In the above example, you are building a service named deployment from the codeship/google-cloud-deployment image that we maintain. This service will be the container that actually executes all of your Google Cloud commands. That works because the image provided installs the Google Cloud CLI and is configured to use the Google credentials you pass in as env vars for authentication.

Where the service says encrypted_env_file: google-auths.env, it is referencing a file you will need to create named google-auths.env. This file contains the Google account credentials you will be logging in with, which must be pegged to a Google-side Services Account with appropriate permissions.

Once you have dropped those credentials into a document, formatted as environment variables, you can use Codeship’s local CLI to encrypt them and keep everything secure. This encrypted version is what we are referencing in the line encrypted_env_file: google-auths.env.

Commands

Once you have the necessary services defined in your codeship-services.yml file, you will next need to create the file that passes those services all the commands you are trying to run. This file is called the codeship-steps.yml file.

In this case, we will write a simple example assuming you have Google Functions deployment commands written to a script in your repository, alongside the function module.

- name: google-functions-deployment
  service: deployment
  command: google-deploy.sh

This command is telling the deployment service defined earlier to run the google-deploy.sh script. Inside that script, you will have something similar to:

#!/bin/bash

# Authenticate with the Google Services
codeship_google authenticate

# switch to the directory containing your app.yml (or similar) configuration file
# note that your repository is mounted as a volume to the /deploy directory
cd /deploy/

# deploy the application
gcloud beta functions deploy "${CLOUD_FUNCTION_NAME}" --stage-bucket "${GOOGLE_CLOUD_BUCKET_NAME}" <TRIGGER>

The specifics of your deployment script will depend on what your Google Function is doing. As you can see, Functions has a deploy command in the CLI with required variables depending on the task.

More Google Functions Resources

This is a very high-level look at Google Functions, as well as deploying them via Codeship Pro. There is much more to know, including specifics around writing your functions as well as the deployment scripts you will need to run them. To learn more, we recommend checking out these resources:


The post What Is Google Functions and How to Use It with Codeship appeared first on the Codeship blog, via @codeship.

Angular Vienna September 2017

Angular Vienna

Back from our well-earned summer break, we are welcoming the season with a special guest. Manfred Steyer - international speaker, GDE and O'Reilly author.

WE ARE STILL LOOKING FOR SPEAKERS AND SPONSORS

---------- Agenda -----------

• 18:30 - Warm up

• ~ 19:00 - Start

------------ Talks -------------

• Through the Sound Barrier: High-Performance Applications with Angular

There is no single adjusting screw for performance tuning in single-page applications; rather, several influencing factors need to be considered. This talk shows how to deal with them.

You'll learn how to leverage Ahead of Time Compilation (AOT), Tree Shaking and Lazy Loading to improve the startup performance of your application dramatically. For this, we'll look at the Angular Build Optimizer as well as at Google’s Closure compiler that allows aggressive optimizations. In addition to that you'll see how to use the optimization strategy OnPush to speed up data binding performance in your solutions. Furthermore, Service Worker for caching and instant loading will be covered as well as Server Side Rendering to improve the perceived loading time.

Bio of the speaker:

Trainer and Consultant with focus on Angular. Google Developer Expert (GDE) who writes for O'Reilly, Hanser and the German Java Magazine. Regularly speaks at conferences.

Manfred Steyer
SOFTWAREarchitekt.at
http://www.softwarear...


Sponsors:

KIVU 


KIVU is looking for an ambitious front-end developer to join a diverse team of developers, working on the graphical user interface to a novel data science platform and graph database.


Additional Infos:

Angular-Vienna and Manfred Steyer are giving a workshop together in Vienna. A big part of the money goes to our upcoming "association", Angular Austria.

So please attend the workshops:
• Senkrechtstart
• Deep Dive


------------- Submit Talks ---------------

Please submit your talk by using the meetup contact form or by commenting below.

Our limit of scheduled talk time for each meetup is around 90 minutes. Lightning talks can also be given spontaneously, if there's enough time left.

• Normal talk (~20 to 40 min.)
• Short talk (~15 min.)
• Lightning talk (~5 min.)


------------- Sponsor Info ---------------

We are looking for a drink/pizza sponsor for this meetup and upcoming meetups.

Have a job ad to link, get some attention or just be nice - that's your chance!

Contact us for further details.

Vienna - Austria

Thursday, September 28 at 6:30 PM

29

https://www.meetup.com/Angular-Vienna/events/242334310/

{}
(clojure 'vienna) Meetup ( Feed )
Friday, 18 August 2017
Clojure Meetup September

(clojure 'vienna)

Join us for our September meetup! 

New location: The friendly people at abloom have been so kind to host us this time round!

If you have a project or something Clojure related you would like to talk about, just leave a comment below and we take it from there!


Vienna - Austria

Tuesday, September 12 at 7:00 PM

10

https://www.meetup.com/clojure-vienna/events/242618625/

Querying and Pagination with DynamoDB

Reading Time: 9 minutes

This is the second of a three-part series on working with DynamoDB. The first article covered the basics of DynamoDB, such as creating tables, working with items along with batch operations, and conditional writes.

This post will focus on different ways you can query a table in DynamoDB and, more important, when to take advantage of which operation efficiently, including error handling and pagination.


In the first post, we created a users table that contains items with a user's information. Each user was identified by a unique email field, making email the ideal candidate for the Partition Key (or Hash Key). The Sort Key (or Range Key) of the Primary Key was intentionally kept blank.

Apart from the Primary Key, one Global Secondary Index was also created for the expected search pattern of the users table. It’s intuitive that search functionality will focus mainly on finding users based on their name and creation time, which makes the created_at field an ideal candidate for the Partition Key and the first_name field an ideal candidate for the Sort Key.

Additionally, instead of normalizing Authentication Tokens and Addresses into separate tables, both pieces of information are kept in a users table.

Table Structure

Create a users table in DynamoDB with the above structure if you haven’t already. Check out the first post in the series if you get stuck.
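
If you would rather create the table from code than from the console, here is a minimal sketch of the create_table call with the AWS SDK for Ruby. The key schema follows the description above; the gem name, the attribute types, the throughput values, and the $ddb client variable are assumptions:

# create_users_table.rb (sketch)
require 'aws-sdk-dynamodb'

$ddb = Aws::DynamoDB::Client.new

$ddb.create_table(
  table_name: 'users',
  attribute_definitions: [
    { attribute_name: 'email',      attribute_type: 'S' },
    { attribute_name: 'created_at', attribute_type: 'S' },
    { attribute_name: 'first_name', attribute_type: 'S' }
  ],
  key_schema: [
    # Primary Key: email as the Partition Key, no Sort Key
    { attribute_name: 'email', key_type: 'HASH' }
  ],
  global_secondary_indexes: [
    {
      index_name: 'created_at_first_name_index',
      key_schema: [
        { attribute_name: 'created_at', key_type: 'HASH' },
        { attribute_name: 'first_name', key_type: 'RANGE' }
      ],
      projection: { projection_type: 'ALL' },
      provisioned_throughput: { read_capacity_units: 5, write_capacity_units: 5 }
    }
  ],
  provisioned_throughput: { read_capacity_units: 5, write_capacity_units: 5 }
)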

Querying in DynamoDB comes in two flavors: query operation and scan operation. Both operations have different use cases.

Query Operation

The query operation in DynamoDB is different from how queries are performed in relational databases due to its structure. You can query only Primary Key and Secondary Key attributes from a table in DynamoDB.

The query operation finds items based on Primary Key values. You can query any table or secondary index that has a composite Primary Key (a Partition Key and a Sort Key). – AWS DynamoDB

Sounds constraining, as you cannot just fetch records from a database table by any field you need. But that is the trade-off: you give up flexibility in querying in exchange for speed and flexibility of data storage.

NoSQL databases must be designed by keeping data access patterns and application requirements in mind, whereas SQL databases are designed around the structure of data and normalization. This distinction will help you tremendously in designing databases in NoSQL.

Querying with a Primary Index

Suppose you want to authenticate a user with an authentication token. You will first need to fetch the user by email and check whether the token provided by the user matches one of the tokens in the authentication_tokens attribute.

# models/user.rb
...
def self.find_by_email(email)
  if email
    begin
      query_params = {
        table_name: table_name,
        expression_attribute_names: {
          '#hash_key_name' => 'email'
        },
        expression_attribute_values: {
          ':hash_key_val' => email
        },
        key_condition_expression: '#hash_key_name = :hash_key_val',
        limit: 1
      }

      resp = $ddb.query(query_params)
      # return first item if items matching key condition expression found
      resp.items.try(:first)      
    rescue Aws::DynamoDB::Errors::ServiceError => e
      puts e.message
      nil
    end
  else
    nil
  end
end
...

An interesting part in the #find_by_email method is the query_params hash. query_params can be broken down into three main parts:

  • table_name – The name of the table against which the query is being performed.
  • expression_attribute_names and expression_attribute_values – These hold the aliases of attribute names and attribute values respectively, which are used in the query statement. Aliases for attribute names are prefixed with # and aliases for attribute values are prefixed with :.
  • key_condition_expression – An actual query statement containing an attribute name alias with values and operators. The operators allowed in key_condition_expression are =, >, <, >=, <=, BETWEEN, and begins_with.

In every query, the Key condition expression must contain a Partition Key with a value and the equality operator = for that value. By this rule, all our queries will contain the #hash_key_name = :hash_key_val part. No operator other than = can be used with the Partition Key.

The Sort Key isn’t used in the above query, as specifying a Sort Key is optional in a query statement. The Sort Key is primarily used to narrow down results for items with a particular hash key.

The query operation always returns a result set in the items key, which can be empty if no record is found matching the criteria specified by the query. Also, the items in a result are in ascending order by default.

To get items in descending order, you need to pass the scan_index_forward option as false in the query hash.

One other part that's omitted in the above query is filter_expression. It removes items from the result set that do not match the filter_expression. A filter_expression can only be applied to attributes other than the Partition Key or Sort Key attributes, and it is applied after the query finishes but before results are returned.
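
As a minimal sketch (not from the original article), here is how those two options could be combined in a query hash against the created_at_first_name_index GSI mentioned earlier; the last_name filter value is hypothetical:

query_params = {
  table_name: table_name,
  index_name: 'created_at_first_name_index',
  expression_attribute_names: {
    '#hash_key_name' => 'created_at',
    '#last_name' => 'last_name' # non-key attribute, used only for filtering
  },
  expression_attribute_values: {
    ':hash_key_val' => DateTime.current.to_s,
    ':last_name_val' => 'doe' # hypothetical filter value
  },
  key_condition_expression: '#hash_key_name = :hash_key_val',
  # applied after the query finishes, before results are returned
  filter_expression: 'begins_with(#last_name, :last_name_val)',
  # false returns items in descending Sort Key (first_name) order
  scan_index_forward: false
}

resp = $ddb.query(query_params)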

Querying with Secondary Index

By default, a query operation is performed with the Primary Index attributes. To query against a Secondary Index, you need to specify the name of the Secondary Index with the index_name key in the query hash. All other rules remain the same as when querying with the Primary Index.

Suppose you now need to search for users registered on a particular day and whose first name starts with jon. The SQL equivalent query would be:

select users.* from users where users.created_at = ' xxx ' and users.first_name like 'jon%'

A Global Secondary Index named created_at_first_name_index, created along with the table, can be used to perform this query.

# models/user.rb
def self.first_name_starts_with(name)
  if name
    begin
      query_params = {
        table_name: table_name,
        index_name: 'created_at_first_name_index',
        expression_attribute_names: {
          '#hash_key_name' => 'created_at',
          '#range_key_name' => 'first_name'
        },
        expression_attribute_values: {
          ':hash_key_val' => DateTime.current.to_s,
          ':range_key_val' => 'jon'
        },
        key_condition_expression: '#hash_key_name = :hash_key_val AND begins_with(#range_key_name, :range_key_val)'
      }

      $ddb.query(query_params).items  
    rescue Aws::DynamoDB::Errors::ServiceError => e
      puts e.message
      nil
    end
  else
    nil
  end
end

The above method is very similar to the #find_by_email method and returns the items for users created today whose first name starts with jon.

Since you need to access all users created on a certain date, avoid using the limit option. However, in most of the queries, you will want to limit the number of items returned. Otherwise, in certain situations, such queries will turn out to be very expensive, as read capacity units will be consumed if the size of the returned items is large. After read capacity units are consumed, your query operations will be throttled.

A query operation is used to build finder methods such as #find and #find_by, as well as class methods or scopes with query logic that fetch items from the database into Rails models. Additionally, you will need to use filters and Local and Global Secondary Indexes to query efficiently.

Scan Operation

Similar to a query operation, a scan operation can be performed against a table with either a Primary or Secondary Index. But unlike a query operation, a scan operation returns all items from the table. A scan operation is useful when you need to access most of the items for a table and do not need to filter out a large amount of data.

A scan operation will read all the items from a table sequentially. It will consume read capacity for an index very quickly. AWS recommends limiting scan operations, otherwise read capacity units will be consumed very quickly and this will affect performance.

If you want to get all users whose first name starts with jon, regardless of the date the users were created, a scan operation can be used.

# models/user.rb
def self.all_users_with_first_name_as(name)
  if name
    begin
      scan_params = {
        table_name: table_name,
        expression_attribute_names: {
          '#range_key_name' => 'first_name'
        },
        expression_attribute_values: {
          ':range_key_val' => name
        },
        filter_expression: 'begins_with(#range_key_name, :range_key_val)'
      }
      $ddb.scan(scan_params).items  
    rescue Aws::DynamoDB::Errors::ServiceError => e
      puts e.message
      nil
    end
  else
    nil
  end
end

Instead of a key condition expression, a filter expression is used to fetch users. table_name, expression_attribute_names, and expression_attribute_values are similar to a query operation. A filter expression can be used with a query operation in a similar manner.

Filter expressions slow down query response time if a large amount of data needs to be filtered. However, I prefer to use filter expressions instead of filtering out data in Ruby with custom logic, as working with large data is memory-consuming and slower.


Extending a Query and Scan Operation with Pagination

Query and scan operations return a maximum of 1 MB of data in a single request. The result set contains a last_evaluated_key field. If more data is available for the operation, this key holds the key of the last evaluated item; otherwise, it remains empty.

You need to check whether the last_evaluated_key attribute is present and, if so, construct the query again, setting the last_evaluated_key from the response as the exclusive_start_key of the query_params hash. This tells DynamoDB to continue returning items after the given exclusive_start_key, from where the last response stopped.

This process needs to be continued until last_evaluated_key is empty.

This is essential, as you never want to return an incomplete response to a user, and pagination is needed in both query and scan operations. So we will move the actual querying into a #query_paginated method, which the finder methods will use.

# models/user.rb
def self.query_paginated(query_params, operation_type = 'query')
  raise Exception, "Invalid Operation Type, #{operation_type}" unless ['query', 'scan'].include?(operation_type)

  items = []
  begin
    loop do
      result = $ddb.public_send(operation_type, query_params)
      items << result.items
      items.flatten!

      if result.last_evaluated_key.present?
        query_params[:exclusive_start_key] = result.last_evaluated_key
      else
        break
      end
    end

    return items.flatten
  rescue Aws::DynamoDB::Errors::ServiceError => e
    puts e.message
    return []
  end
end

def self.first_name_starts_with(name)
  ...
  # replace $ddb.query(query_params).items with query_paginated method call as below
  query_paginated(query_params, 'query')
  ...
end

def self.all_users_with_first_name_as(name)
  ...
  # replace $ddb.scan(scan_params).items with query_paginated method call as below
  query_paginated(scan_params, 'scan')
  ...
end

Read Consistency for Query and Scan

DynamoDB replicates data across multiple availability zones in the region to provide an inexpensive, low-latency network.

When your application writes data to a DynamoDB table and receives an HTTP 200 response (OK), all copies of the data are updated. The data will eventually be consistent across all storage locations, usually within one second or less. – AWS DynamoDB

Eventually consistent read

Performing an Eventually Consistent Read means the result might not reflect a recently completed write operation.

Strongly consistent read

Performing a Strongly Consistent Read ensures you always receive recently updated data.

Capacity units with query and scan

Every read operation (query, scan, GetItem, BatchGetItem) consumes read capacity units. The capacity units determine how much you will be charged for a particular operation.

By defining your throughput capacity in advance, DynamoDB can reserve the necessary resources to meet the read and write activity your application requires, while ensuring consistent, low-latency performance. – AWS DynamoDB

If a query or scan operation is performed on a table or a Local Secondary Index, the read capacity units specified for that table are consumed. When an operation is performed on a Global Secondary Index, the read capacity units specified for that GSI are consumed.

By default, an Eventual Consistent Read is used in scan and query operations. You can specify that you want a Strongly Consistent Read by specifying consistent_read as true. A Strongly Consistent Read operation consumes twice the read capacity units as an Eventual Consistent Read.
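
As a minimal sketch (not from the original article), requesting a Strongly Consistent Read on the users table could look like this; note that Strongly Consistent Reads are supported on tables and Local Secondary Indexes, but not on Global Secondary Indexes:

query_params = {
  table_name: table_name,
  expression_attribute_names: {
    '#hash_key_name' => 'email'
  },
  expression_attribute_values: {
    ':hash_key_val' => email
  },
  key_condition_expression: '#hash_key_name = :hash_key_val',
  # a Strongly Consistent Read consumes twice the read capacity units
  consistent_read: true
}

resp = $ddb.query(query_params)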

Read capacity units allocated for a table and GSI are evenly spread across all partitions. A sudden burst in read requests for a particular item, or multiple items in the same partition, easily consumes capacity units for a given partition. This results in request throttling.

This is known as the Hot Key problem. The Partition Key must be chosen so as to avoid this kind of situation, and such sudden bursts should also be avoided.

One way to avoid such a sudden burst is to request data in batches along with pagination. Instead of requesting 20_000 items in a single request, it would be better to request the items in four requests of 5_000 items each. This will be explained in detail in the next article in this series.

Conclusion

This article explains how to work with query and scan operations and when to use which. Pagination works very differently than in SQL databases, and ensuring that you receive complete data is quite important.

It’s evident that tables in DynamoDB must be designed by keeping data access patterns and application requirements in mind. Chosen Primary Indexes and Secondary Indexes determine how flexibly you can query the data. The indexes also determine how capacity units will be consumed, which in turn affects cost.

Read the DynamoDB Developer Guide's "Working with Queries" and "Working with Scan" sections to understand querying concepts in detail.

The next article will focus on the partitioning behavior of DynamoDB, as well as different strategies for choosing indexes.




The post Querying and Pagination with DynamoDB appeared first on the Codeship blog.

Let’s Talk About Shell Scripting


Reading Time: 8 minutes

Bash is a command-line shell now available on all major operating systems, and it is the environment from which most developers can accomplish the majority of tasks for a system. Many of the commands that need to be executed to complete a task can be grouped together in a script to help avoid repetition when typing the commands. Furthermore, there’s a good amount of programming capability in shell scripting that allows you to write simple to complex programs.

I’ll be covering some basics in Bash scripting, as well as some more advanced techniques you can take advantage of. I’ll also be covering a bit of fish shell and why you may want to consider using it as a better tool for your command-line experience.


“You may want to consider using fish shell to improve your command-line experience.” via @6ftdan
Click To Tweet


Bash Scripts

Bash scripts are primarily written as a text file of commands that can be executed. They can end in the .sh extension or no extension at all. They can be executed by preceding the file name with the bash command (bash myscript.sh), or by setting the file mode as executable and placing the path to Bash's executable at the beginning, as such:

#!/bin/bash

Then you can execute that script by giving the path and file name ./myscript.sh.

In Bash, comments are created with the pound symbol. For the sake of this post, I will be writing # Output: inline in cases where it will be more helpful.

Setting values in a script is fairly simple.

value="Hello World!"

echo value      # Output: value
echo $value     # Output: Hello World!
echo ${value}   # Output: Hello World!
echo "value"    # Output: value
echo "$value"   # Output: Hello World!
echo "${value}" # Output: Hello World!

As you can see above, even though the variable value was passed, it wasn’t interpreted as a variable unless we preceded it with the dollar sign. Also notice that there are no spaces in value="Hello World!". This is very important in Bash as many things won’t work if the spaces aren’t as they need to be.

Conditional checks in Bash are pretty straightforward as well.

value=0

if [ $value == 0 ]; then
  echo "Zero"
else
  echo "Not Zero"
fi
# Output: Zero

The if syntax is a bit strange but not too difficult to learn. Note the space inside of each outer edge square bracket for the if condition. This is important or the script won’t work. In Bash, the semicolon is the equivalent of a new-line for your code. You could also place then on the next line instead of using a semicolon.

For a second condition statement, you can use elif for else if. You will still need to follow the same ; then pattern for that.

Writing functions in Bash is very simple.

cow() {
  echo Moo
}

cow # Output: Moo

In bash, parameters passed from the console or to a function take a dollar number format, where the number indicates which position of the parameter it is.

cow_eat() {
  echo Cow chews $1
}

cow_eat grass # Output: Cow chews grass

Bash loads a .bashrc file from your home directory for all its defaults. You can place as many functions as you want in there and they will be available on the command line at any time as if they’re a program on your system.

Now let’s jump into a more advanced Bash script and cover its details.

#!/bin/bash
#
# Egg Timer - for taking periodic breaks from desk work
#

limit=45
summary="The mind becomes better with movement"
endmessage="Take a break! Take a short stroll."
echo -n $limit

sleeper() {
  number=$1
  clock=60
  while [ $clock != 0 ]; do
    let "clock = $clock - 1"
    [ $((number%2)) -eq 0 ] && echo -n '.' || echo -n '*'
    sleep 1
  done
}

while true; do
  counter=0
  while [ $counter != $limit ]; do
    sleeper $counter
    let "counter = $counter + 1"
    
    printf '\r%2d' $(($limit - $counter))
  done
  if [ $counter == $limit ]; then
    echo
    notify-send -u critical -i appointment "$summary" "$endmessage"
    echo -e '\a' >&2
    xdg-open https://www.youtube.com/watch?v=Hj0jzepk0WA
  fi
done

This egg timer script runs a countdown clock of 45 minutes, after which it will run three commands.

The first is a desktop notification for Linux systems with the text given to remind you of the importance of taking a break. The next command will make the system beep if it’s supported (it’s a very old-school system command), and the last command will open the default web browser in Linux and play a series of Rocky Balboa motivational workout music scenes from YouTube.

These commands can be changed to whatever is native for your particular operating system to achieve the same result.

Let’s go briefly over a few new things this script introduces. The while loop is like if, except it uses do instead of then and closes with done instead of fi. Variable assignment can also be done with let and a string which permits spaces.

The line with number%2 in it is the Bash idiom for a ternary operation: the command after && executes when the test evaluates as true, and the command after || executes when it evaluates as false.

The -eq 0 within that same block is an equality operator of test. The square brackets for conditional situations are the equivalent of using the test command in Bash. You can look up what conditions are available for that by typing man test.

The $(()) in the printf line allows for arithmetic expansion and evaluation in the script.

Parallel Execution

Computer technology has reached a plateau as far as single-core speeds go. So today, we add more cores for more power.

But the programs we write, and are used to writing, are largely written for a single CPU core. One of the ways we can take advantage of more cores is by simply running tasks in parallel. In Bash, we have the xargs command, which allows us to execute tasks on up to as many cores as our system has.

Here’s a simple example script of writing 10 values:

value=0
while [ $value -lt 10 ]; do
  value=$(($value + 1))
  echo $value
done

In Bash, we can pipe the output of any command into the streaming input of the next with the pipe operator |. xargs allows that stream to be used as regular command-line input rather than a stream. When run, the above script will print the numbers 1 to 10 in order. But if we split the work into a parallel workload across the system’s cores, the order will vary.

# Command
bash example.sh | xargs -r -P 4 -I VALUE bash -c "echo VALUE"

# Output
2
1
4
3
6
5
7
8
9
10

The -r option on xargs says to not do anything if the input it is given is empty. The -P 4 tells xargs up to how many CPU cores we’re going to utilize in parallel. The -I VALUE tells xargs what string to substitute from the following command with the input passed in from the pipe operator. The -c parameter we’re handing to the Bash command says to run the following quoted string as a Bash command.

If only one instance of xargs is running, it will use up to the maximum number of CPU cores you have available, with a pretty good performance improvement. If you run xargs in multiple shells this way and they collectively try to use more cores than you have, you will lose nearly all of that performance improvement.

Here’s a real world example:

crunch 6 6 abcdef0123456789 --stdout | \
xargs -r -P 4 -I PASSWORD bash -c \
"! aescrypt -d -p 'PASSWORD' -o 'PASSWORD' encrypted_file.aes \
2>/dev/null; if [[ -s \"PASSWORD\" ]]; then exit 255; fi"

The crunch command is a tool that lets you iterate through every possible character sequence for a given length and character set. aescrypt is a command-line encryption/decryption tool for files. The above is useful for reopening an encrypted file whose password you have forgotten, when you know that the password is six characters long and uses only lowercase hexadecimal characters.

A password of this length would be simple to crack on almost any system. Once you go beyond eight characters and a greater character variety, this becomes a much less plausible way to recover your forgotten password data, as the time needed to attempt the combinations rises exponentially.

The Bash code in this example handles exit statuses and verifies whether an output file was created. The bang ! tells it to ignore the failing exit status and allows xargs to continue on. The -s flag in test checks whether a file by that name exists in the current directory and is not empty. If the file does exist, then we raise a failing exit status to stop any further work by crunch and xargs.


fish shell

Bash has been around for a very long time, and there are a bunch of alternative shells available. fish shell is one that I find quite enjoyable. There can be many advantages to switching shells, as newer shells are built with the lessons learned from the old ones and are better in many ways.

It features a frontend configuration tool that you can access via your web browser. It also provides a cleaner scripting language, beautiful autocompletion generated from your system’s man pages, a dynamic shell display depending on your own script configuration, and it’s quite colorful.

Continuing from the forgotten password example, let’s say you remembered a handful of words that made up the password but you don’t remember the order. So you write your own Ruby script to permute the possibilities and print each one out to the command line. You could then write a fish shell script to process each line like:

for password in (ruby script.rb)
  if test -s the_file
    break
  else
    aescrypt -d -p $password the_file.aes 2>/dev/null
  end
end

This looks a lot more like the scripting languages we use and love every day. And you can write out individual fish shell functions in a specific functions directory as an improvement for your command-line tool set. Here’s my fish shell function for pulling GitHub pull requests.

function git-pr
  set -l id $argv[1]
  if test -z $id
    echo "Need Pull request number as argument"
    return 1
  end
  git fetch origin pull/$id/head:pr_$id
  git checkout pr_$id
end

The fish shell requires command-line arguments to be accessed through $argv. One really nice thing about this scripting language is that it lets you declare the scope for variables: set -l id above sets the id variable as a locally scoped variable. You'll notice that $id can be placed in any of the following commands, substituting the variable's value into those commands as is. So if I run git-pr 77, it will fetch and check out PR #77 for the project in the directory I'm in.

Summary

The world of shell scripting is vast, as you can see in the Advanced Bash-Scripting Guide. I hope that you've been enlightened to new possibilities with shell scripting.

Of course, while Bash is powerful and stable, it is quite old and has its own headaches. Newer shells such as fish shell overcome many of those headaches and make working with our systems more enjoyable. Despite its age, Bash is one of those predominant tools that you'll likely need to master; still, if it's feasible for you, I recommend checking out some of the other shells available.


“Let’s Talk About Shell Scripting” via @6ftdan
Click To Tweet


The post Let's Talk About Shell Scripting appeared first on the Codeship blog.

Integrating Ruby on Rails Static Analysis with Codeship


Reading Time: 5 minutes

Every development team has their own preferences for what kind of checks they want to run on newly committed code. One specific flavor of tools that developers use is static analysis. Static analysis tools are programs that perform checks "statically" on code. This means that they determine the correctness or validity of code without executing it.

In nonprogramming terms, static analysis is a lot like checking a sentence for errors before speaking it out loud. There is a wide range of problems and insights that static analysis can discover. However, since these tools don’t execute our code, they can’t catch any error that’s unique to the execution runtime. In the same way that we check sentences for grammatical errors, some problems don’t become apparent until we speak (or execute) the words in sequence.

Today, I want to run through an example using static analysis on a Ruby on Rails project and integrating those checks into our Codeship CI process.




To accomplish this, we’ll need a few tools:

  1. A source-control connected version of a Ruby on Rails application like this
  2. A Codeship account

We’ll start developing our CI process locally and then look to translating it to a Codeship configuration.

Staying in Style with Rubocop

Rubocop is the first Ruby static analysis tool I want to explore. While there are a variety of purposes that this library covers, I’ve found it’s most commonly used as a means to enforce the Ruby Style Guide in your project.

There’s a lot to be said about having a consistent style across your application. Sandi Metz wrote a really good blog about it recently. One of Sandi’s points that resonated with me was:

Code is read many more times than it is written, which means that the ultimate cost of code is in its reading.

In larger teams of developers, having them create the same kind of consistently readable code is crucial for efficient collaboration. Rubocop helps remind us how our code fits a specific coding style.

We can install Rubocop via:

gem install rubocop

This will give you the ability to run the Rubocop CLI by simply running:

rubocop app

Rubocop will then run some checks that will either come back successful or with errors. We can treat no errors as a “successful” check and a run with any errors as an “unsuccessful” check.

In a sense, our testing and audit process might eventually look something like this:

bundle exec rspec spec # or whatever testing framework you use
rubocop app

As mentioned before, please note that if Rubocop finds any error in its analysis, the build will fail. This might be somewhat annoying as your test suite and code base grows. However, you can configure Rubocop’s preferred ruleset with a .rubocop.yml file.

Having a consistent style and formatting is certainly nice. However, I’d love to talk a bit about some static analysis libraries that dive a bit deeper than just checking for consistent style.

Detecting Code Smells with Reek

Next we’re going to talk about Reek, a static analysis tool that detects code smells in your application.

If you’re not familiar with the idea of code smells, Jeff Atwood (@codinghorror) has an excellent rundown of them. I’ve found Reek to be incredibly useful for helping me understand if my newly created code exhibits some of these smells.

We can install Reek via:

gem install reek

With Reek installed, we can run an audit on our codebase locally. I like to target specific aspects of my application for code smells. Because of this, I tend to just run Reek on my app directory.

You can execute this audit by running:

reek app

Reek will then run some checks that will either come back successful or with errors. We can treat no errors as a “successful” check and a run with any errors as an “unsuccessful” check.

In a sense, our testing and audit process might eventually look something like this:

bundle exec rspec spec # or whatever testing framework you use
reek app

As mentioned before, please note that if Reek finds any error in its analysis, the build will fail. This might be somewhat annoying as your test suite and code base grows. However, you can configure Reek’s preferred ruleset with a .reek file.

We’ve talked about using Rubocop to make our style consistent and Reek to make our code smell better. However, how do we implement these kind of checks on Codeship?

!Sign up for a free Codeship Account

Codeship Setup

To kick things off, log into your Codeship account and navigate to Create New Project. You’ll then need to connect your Source Control provider of choice and point Codeship toward your hosted project.

Once everything is connected, you’ll be presented with the Test deployment pipeline setup. Depending on how complex your CI pipeline is, you may just want to write a custom solution. However, I’ve found that selecting the Ruby on Rails option as a starting point is incredibly helpful.

Our project will have a Test pipeline that looks something like the following:

Setup Commands

rvm use 2.4.0 # Or whatever Ruby version you're using
bundle install
gem install reek # Used to execute Reek
gem install rubocop # Used to execute Rubocop
rake db:create
rake db:migrate

Our setup commands help prepare our application for testing, but we’re also installing Reek and Rubocop outside of our application bundle. This is because we’ll be executing them outside of Bundler in our next step.

Test Commands

bundle exec rspec spec
rubocop app
reek app

With these elements all in place, we now have a CI pipeline that allows us to test and audit our application code with every new push.

Looking to the Future

Using static analysis tools for local development is incredibly useful for getting more information about the code you've written. However, adding static analysis checks into your pipeline is even more of a commitment. You might find yourself becoming frustrated by certain failures and smells as your codebase grows. Yet having a somewhat "cleaner" codebase will benefit you immensely in the future.




The post Integrating Ruby on Rails Static Analysis with Codeship appeared first on the Codeship blog.

Visual Testing with Percy and Codeship Pro


Reading Time: 3 minutes

At Codeship, we’re pleased to be able to integrate with several third-party products to make your CI/CD workflows that much smoother. We’ve already discussed integrating Percy, a visual testing platform, with your Codeship Basic account. Here’s a brief overview of what you can accomplish when Codeship Pro and Percy work together.




Why Percy and Visual Testing

In a nutshell, Percy lets you take screenshots during your test suite and monitor visual changes, as well as get team approval on updates. And in keeping with the spirit of a CI/CD pipeline, it’s all automated.

Percy extends your CI suite and reduces manual QA time by automatically detecting visual changes on every test run. In the same way that CI gives you confidence for every release, the goal of visual testing is to give you confidence in the visual changes going into your application and the UIs that your users see every day. Percy integrates directly into your test suite and Codeship’s parallel test pipelines, providing fast and iterative feedback about visual changes.

Setting Your Percy Variables

To begin, Percy provides two values when you create a new project inside their application:

  • PERCY_TOKEN
  • PERCY_PROJECT

Add them to the encrypted environment variables that you include in your codeship-services.yml file.

Static Sites

To use Percy with static sites inside Docker images on Codeship Pro, install the percy-cli gem inside your images. You can do this either as part of a Gemfile or by adding this command to the Dockerfile:

RUN gem install percy-cli

Note that this will require you to build an image containing both Ruby and RubyGems. If the image doesn't have both, you won't be able to install the necessary percy-cli gem.

From there, add the following command as a step or inside of a script in your codeship-steps.yml file:

- service: your_service
  command: percy snapshot directory_to_snapshot

A couple of notes: you can use multiple commands to take snapshots of multiple directories, and the directories must contain HTML files.

Integrating Percy with Codeship Pro and Ruby

To integrate Percy with Codeship Pro on a Ruby and Docker project, install the percy-capybara gem inside your images. You can do this either as part of a Gemfile or by adding the following command to the Dockerfile:

RUN gem install percy-capybara

As with static sites, the image you're building must contain both Ruby and RubyGems. Without both, you won't be able to install the necessary percy-capybara gem.

From there, you’ll need to add specific hooks to your Rspec, Capybara, Minitest, or any other test specs you may have. You can find specific integration information for calling Percy from your test specs in the Percy documentation.
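
As an illustration only (the exact hooks depend on your percy-capybara version, so treat this as a sketch and follow the Percy documentation), an RSpec setup might look roughly like this; the snapshot name 'homepage' is hypothetical:

# spec/rails_helper.rb (sketch)
require 'percy/capybara'

RSpec.configure do |config|
  config.before(:suite) { Percy::Capybara.initialize_build } # start a Percy build for the test run
  config.after(:suite)  { Percy::Capybara.finalize_build }   # finalize and upload the snapshots
end

# inside a feature spec, take a snapshot of the current page
# Percy::Capybara.snapshot(page, name: 'homepage')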

These test specs will be called via your codeship-steps.yml file.

Integrating Percy with Codeship Pro and Ember

To integrate Percy with Codeship Pro on an Ember and Docker project, install the ember-percy package into your application, typically via your package.json. From there, add specific hooks into your project’s test specs. Again, specific integration information for calling Percy from your test specs is available in the Percy documentation.

These test specs will also be called via your codeship-steps.yml file.

To add visual testing to your CI/CD pipeline, try integrating Percy with your Codeship Pro projects.




The post Visual Testing with Percy and Codeship Pro appeared first on the Codeship blog.

{}
Vienna WordPress Meetup ( Feed )
Friday, 11 August 2017
September WordPress Vienna Meetup - Securing WordPress


Vienna WordPress Meetup

After the summer we'll dive right into a very important topic - the security of your WordPress website. We're having two great expert talks who you'll also be able to ask further questions after their talks.

Schedule

18:30 | Arrival, Registration
19:00 | Welcome & Introduction
19:15 | WordPress Multi-Site HTTPS Migration: a Case Study - Jeremy Chinquist
19:45 | Break
20:00 | Security for WP users - Harry Martin
20:30 | Socialising!
21:00 | Leaving CodeFactory for drinks somewhere closeby

Talks

WordPress Multi-Site HTTPS Migration: a Case Study by Jeremy Chinquist

APA-OTS has a multi-site WordPress instance that serves 10 websites with varying requirements and use cases. Several of them are much more than simple blogs based on WordPress. The multi-site HTTPS transition was completed in Q1 2017.

In this case study, the following questions will be addressed:
- How can a WordPress multi-site website be migrated to the HTTPS protocol?
- How can this task be accomplished seamlessly…
…without loss of site traffic?
…without duplicate content (SEO)?
…without loss of site speed?
…with avoiding errors and the common pitfalls?

Security for WP users  by Harry Martin

Learn how simple it is to secure your WP installation and why it is a good idea to do it. The talk covers basic information and takes a closer look at the questions: Why is security a topic? Where are the risks? What can you do for better security? These are explained by example using the iThemes Security plugin.

----


Speakers Wanted!

We are looking for interesting speakers and topics for the next WordPress Meetups in autumn and winter. Please use our speaker application form if you want to give a presentation (10–30 minutes).

P. S.: The Austrian WordPress community meets on Slack

If you want to talk to others in the Austrian WordPress community, join us on Slack:

• wordpress.slack.com (worldwide community): register here first!
  Use the e-mail address you registered at wordpress.com or wordpress.org
• austriawpcommunity.slack.com (Austrian community)
• dewp.slack.com (German community)

Information about joining our Slack channel!



Vienna - Austria

Wednesday, September 13 at 6:30 PM

45

https://www.meetup.com/Vienna-WordPress-Meetup/events/239895154/

Scala Meetup - November 2017


Scala Vienna User Group

Talks:

1. ???

2. ???

Lightning Talks:

- ???

About the authors:

More details to come.


Vienna - Austria

Thursday, November 16 at 7:00 PM

1

https://www.meetup.com/scala-vienna/events/242193516/

Working with DynamoDB


Reading Time: 12 minutes

Recently, I worked on an IoT-based project where we had to store time-series data coming from multiple sensors and show real-time data to end users. Further, we needed to generate reports and analyze the gathered information.

To deal with the continuous data flowing from the sensors, we chose DynamoDB for storage. DynamoDB promises to handle large data with single-digit millisecond latency, at any scale. Since it’s a fully managed database service, we never had to worry about scaling, architecture, and hardware provisioning. Plus, we were using AWS IoT for sensors, so choosing a NoSQL database from AWS services like DynamoDB was the right decision.

Here is what Amazon says about DynamoDB:

Fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.




The Basics

DynamoDB tables are made of items, which are similar to rows in relational databases, and each item can have more than one attribute. These attributes can be either scalar types or nested.

Everything starts with the Primary Key Index

Each item has a Primary Key Index that uniquely identifies it. The Primary Key is made of two parts: the Partition Key (Hash Key) and the Sort Key (Range Key), where the Range Key is optional. DynamoDB doesn't just magically spread the data across multiple servers to boost performance; it relies on partitioning to achieve that.

Partitioning is similar to the concept of sharding seen in MongoDB and other distributed databases, where data is spread across different database servers to distribute load and provide consistently high performance. Think of partitions as similar to shards; the Hash Key specified in the Primary Key determines in which partition an item will be stored.

In order to determine in which partition an item will be stored, the Hash Key is passed to a special hash function which ensures that all items are evenly spread across all partitions. This also explains why it is called a Partition Key or Hash Key. The Sort Key, on the other hand, determines the order of items being stored and allows DynamoDB to have more than one item with the same Hash Key.

The Sort Key, when present, combined with the Partition Key (Hash Key), forms the Primary Key Index, which is used to uniquely identify a particular item.

This is very useful for time-series data such as stock prices, where the price of one stock changes over time and you need to track the price per stock. In such cases, the stock name can be the Partition Key and the date can be used as the Range Key to sort data according to time.
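
As a tiny illustration (the attribute names and values below are made up, not from the original article), such time-series items might look like:

# stock_name acts as the Partition Key, date as the Sort Key
items = [
  { 'stock_name' => 'ACME', 'date' => '2017-08-01', 'price' => 101.25 },
  { 'stock_name' => 'ACME', 'date' => '2017-08-02', 'price' => 103.10 }
]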

Secondary indexes

Primary indexes are useful to identify items and allow us to store infinitely large amounts of data without having to worry about performance or scaling, but soon you will realize that querying data becomes extremely difficult and inefficient.

Having worked mostly with relational databases, I found querying to be the most confusing aspect of DynamoDB. You don't have joins or views as in relational databases; denormalization helps, but not much.

Secondary indexes in DynamoDB follow the same structure as the Primary Key Index, where one part is the Partition Key and the second part is the Sort Key, which is optional. Two types of secondary indexes are supported by DynamoDB: the Local Secondary Index and the Global Secondary Index.

Local Secondary Index (LSI):

The Local Secondary Index is a data structure that shares the Partition Key defined in the Primary Index, and allows you to define the Sort Key with an attribute other than the one defined in the Primary Index. The Sort Key attribute must be of scalar type.

While creating the LSI, you define the attributes to be projected other than the Partition Key and the Sort Key, and the LSI maintains those projected attributes along with the Partition Key and Sort Key. The LSI data and the table data for each item are stored inside the same partition.

Global Secondary Index (GSI):

Sometimes you will need to query data by an attribute other than the Partition Key. You can achieve this by creating a Global Secondary Index for that attribute. A GSI follows the same structure as the Primary Key, though it has a different Partition Key than the Primary Index and can optionally have one Sort Key.

Similar to the LSI, attributes to be projected need to be specified while creating the GSI. Both the Partition Key attribute and Sort Key attribute need to be scalar.

You definitely should look up the official documentation for GSI and LSI to understand how indexes work.


Setup for DynamoDB

DynamoDB doesn't require special setup, as it is a web service fully managed by AWS. You just need API credentials to start working with it. There are two primary ways you can interact with DynamoDB from Ruby: the AWS SDK for Ruby or Dynamoid.

Both libraries are quite good, and Dynamoid offers an Active Record kind of interface. But to get an overview of how DynamoDB works, it’s better to start with AWS SDK for Ruby.

In your Gemfile:

gem 'aws-sdk', '~> 2'

First of all, you need to initialize a DynamoDB client, preferably via an initializer so as to avoid instantiating a new client for every request you make to DynamoDB.

# dynamodb_client.rb
$ddb = Aws::DynamoDB::Client.new({
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region: ENV['AWS_REGION']
})

AWS provides a downloadable version of DynamoDB, ‘DynamoDB local’, which can be used for development and testing. First, download the local version and follow the steps specified in the documentation to set up and run it on a local machine.

To use it, just specify an endpoint in the DynamoDB client initialization hash as shown below:

# dynamodb_client.rb
$ddb = Aws::DynamoDB::Client.new({
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    region: ENV['AWS_REGION'],
    endpoint: 'http://localhost:8000'
})

The DynamoDB local server runs on port 8000 by default.

Put it all together with this simple example

Suppose you need to store the information of users along with shipping addresses for an ecommerce website. The users table will hold information such as first_name, last_name, email, profile_pictures, authentication_tokens, addresses, and much more.

In relational databases, a users table might look like this:

And addresses and authentication tokens will need to be placed separately in other tables with the id of a user as Foreign Key:

Addresses:

Authentication Tokens:

In DynamoDB, there is no concept of Foreign Keys and no joins. There are ways to reference data related to an item from another table as we do in relational databases, but it’s not efficient. A better way would be to denormalize data into a single users table. As DynamoDB is a key value store, each item in a users table would look as shown below:

{
  "first_name": "string",
  "last_name": "string",
  "email": "string",
  "created_at": "Date",
  "updated_at": "Date",
  "authentication_tokens": [
    {
      "token": "string",
      "last_used_at": "Date"
    }
  ],
  "addresses": [
    {
      "city": "string",
      "state": "string",
      "country": "string" 
    }
  ]
}

Make email the Partition Key of the Primary Key and leave the Range Key blank, as each user has a unique email and we definitely need to look up a user by a particular email.

In the future, you might need to search users with first_name or last_name. This requirement makes first_name and last_name ideal candidates for the Range Key. Additionally, you may want to get users registered on a particular date or updated on a particular date, which can be found with created_at and updated_at fields, making them ideal for the Partition Key in the Global Secondary Index.

For now, we will make one Global Secondary Index (GSI), where created_at will be the Partition Key and first_name will be Range Key, allowing you to run queries like:

select users.* from users where users.created_at = ' xxx ' and users.first_name like 'xxx%'

Basic CRUD Operations

All logic related to persistence and querying stays in the model, so following that convention, first create the class User and include ActiveModel::Model and ActiveModel::Serialization modules inside the class.

ActiveModel::Model adds callbacks, validations, and an initializer. The main purpose for adding it is to initialize the User model with parameters hash like Active Record does.

ActiveModel::Serialization provides serialization helper methods such as to_json, as_json, and serializable_hash for objects of the User class. After adding these two modules, you can specify the attributes related to the User model with attr_accessor method. At this point, the User model looks like this:

# models/user.rb
class User
  include ActiveModel::Model
  include ActiveModel::Serialization

  attr_accessor :first_name, :last_name, :email, :addresses, :authentication_tokens, :created_at, :updated_at
end

You can create User objects with User.new, pass parameters hash, serialize, and deserialize it, but you cannot persist them in DynamoDB. To be able to persist the data, you will need to create a table in DynamoDB and allow the model to know about and access that table.

I prefer to create a migrate_table! class method where I put the logic required for table creation. If the table already exists, it will be recreated, and the application should wait until the table is created, as table creation on DynamoDB can take a few minutes.

# models/user.rb

...
def self.migrate_table!
      $ddb.delete_table(table_name: table_name) if $ddb.list_tables.table_names.include?(table_name)
      create_table_params = {
        table_name: table_name,
        # array of attribute names and their types that describe the schema for the table and indexes
        attribute_definitions: [
          {
            attribute_name: "first_name",
            attribute_type: "S"
          },
          {
            attribute_name: "created_at",
            attribute_type: "S"
          },
          {
            attribute_name: "email",
            attribute_type: "S"
          },
        ],

        # key_schema specifies the attributes that make up the primary key for a table
        # HASH - specifies Partition Key
        # RANGE - specifies Range Key
        # key_type can be either HASH or RANGE
        key_schema: [
          {
            attribute_name: "email",
            key_type: "HASH",
          }
        ],

        # global_secondary_indexes array specifies one or more keys that makes up index,
        # with name of index and provisioned throughput for global secondary indexes
        global_secondary_indexes: [

          index_name: "created_at_first_name_index",
          key_schema: [
              {
                  attribute_name: "created_at",
                  key_type: "HASH"
              },
              {
                  attribute_name: "first_name",
                  key_type: "RANGE"
              }
          ],

          # Projection - Specifies attributes that are copied (projected) from the table into the index.
          # Allowed values are - ALL, INCLUDE, KEYS_ONLY
          # KEYS_ONLY - only the index and primary keys are projected into the index.
          # ALL - All of the table attributes are projected into the index.
          # INCLUDE - Only the specified table attributes are projected into the index. The list of projected attributes then needs to be specified in the non_key_attributes array
          projection: {
              projection_type: "ALL"
          },

          # Represents the provisioned throughput settings for specified index.
          provisioned_throughput: {
              read_capacity_units: 1,
              write_capacity_units: 1
          }
        ],

        # Represents the provisioned throughput settings for specified table.
        provisioned_throughput: {
          read_capacity_units: 1,
          write_capacity_units: 1,
        }
      }
      $ddb.create_table(create_table_params)

      # wait till table is created
      $ddb.wait_until(:table_exists, {table_name: table_name})
    end
    
...

Creating an Item

DynamoDB provides the #put_item method, which creates a new item with passed attributes and the Primary Key. If an item with the same Primary Key exists, it is replaced with new item attributes.

# models/user.rb
class User
    ...
    def save
        item_hash = instance_values
        begin
          resp = $ddb.put_item({
            table_name: self.table_name,
            item: item_hash,
            return_values: 'NONE'
          })
          resp.successful?
        rescue Aws::DynamoDB::Errors::ServiceError => e
          false
        end
    end
    ...
end

The instance method save simply saves an item and returns either true or false depending on the response. The instance_values method returns a hash of all the attr_accessor fields, which is passed as item_hash to the item key.

The return_values option inside the put_item request determines whether you want to receive a saved item or not. We are just interested in knowing whether an item is saved successfully or not, hence ‘NONE’ was passed.
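
For example, a user could be created and persisted like this (the sample data is hypothetical):

user = User.new(
  first_name: 'Jon',
  last_name: 'Doe',
  email: 'jon.doe@example.com',
  created_at: DateTime.current.to_s,
  updated_at: DateTime.current.to_s
)
user.save # => true if the item was written, false otherwise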

Reading an item

Getting an item from DynamoDB by its Primary Key is similar to the way records are found by id with Active Record in relational databases.

The #get_item method is used to fetch a single item for a given Primary Key. If no item is found, the method returns nil in the item element of the response.

# models/user.rb
...
def self.find(email)
    if email.present?
      begin
        resp = $ddb.get_item({
          table_name: self.table_name,
          key: {
            email: email
          }
        })
      resp.item
      rescue Aws::DynamoDB::Errors::ServiceError => e
        nil
      end
    else
      nil
    end
  end
 ...

Updating an item

An item is updated with the #update_item method, which behaves more like the upsert (update or insert) of PostgreSQL. In other words, it updates an item with given attributes. But in case no item is found with those attributes, a new item is created. This might sound similar to how #put_item works, but the difference is that #put_item replaces an existing item, whereas #update_item updates an existing item.

# models/user.rb
  def update(attrs)
    item_hash = attrs
    item_hash['updated_at'] = DateTime.current.to_s
    item_hash.keys.each do |key|
      item_hash[key] = {
        'value' => item_hash[key],
        'action' => 'PUT'
      }
    end
    begin
      resp = $ddb.update_item({
        table_name: self.class.table_name,
        key: {
          email: email
        },
        attribute_updates: item_hash
      })
      resp.successful?
    rescue Aws::DynamoDB::Errors::ServiceError => e
      false
    end
  end

While updating an item, you need to specify the Primary Key of that item and whether you want to replace or add new values to an existing attribute.

Look closely at how item_hash is formed. The attribute hash is passed to the update method as usual and then processed further to add two fields, value and action, in place of simple key => value pairs. This turns a given plain hash into values compatible with the attribute_updates key, as shown below.
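
For illustration (the attribute name and value here are hypothetical), the transformation looks roughly like this:

# before: plain attribute hash passed to #update
{ 'first_name' => 'Jon' }

# after processing: attribute_updates-compatible hash
{ 'first_name' => { 'value' => 'Jon', 'action' => 'PUT' } }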

Deleting an item

The #delete_item method deletes items with a specified Primary Key. If the item is not present, it doesn’t return an error.

# models/user.rb
  def delete
    if email.present?
      begin
        resp = $ddb.delete_item({
          table_name: self.class.table_name,
          key: {
            email: email
          }
        })
        resp.successful?
      rescue Aws::DynamoDB::Errors::ServiceError => e
        false
      end
    else
      false
    end
  end

Conditional Writes

All DynamoDB operations can be categorized into two types: read operations, such as get_item, and write operations, such as put_item, update_item, and delete_item.

These write operations can be constrained with specified conditions; for example, put_item can be told to write only if an item with the same Primary Key does not already exist. All write operations support these kinds of conditional writes.

For example, if you want to create an item only if it isn’t present, you can add attribute_not_exists(attribute_name) as a value of the condition_expression key in the #put_item method params.

def save!
    item_hash = instance_values
    item_hash['updated_at'] = DateTime.current.to_s
    item_hash['created_at'] = DateTime.current.to_s
    begin
      resp = $ddb.put_item({
        table_name: self.class.table_name,
        item: item_hash,
        return_values: 'NONE',
        condition_expression: 'attribute_not_exists(email)'
      })
      resp.successful?
    rescue Aws::DynamoDB::Errors::ServiceError => e
      false
    end
  end

Ruby SDK v2 provides an interface to interact with DynamoDB, and you can read more about the methods provided in the given SDK documentation.

Batch Operations

Apart from four basic CRUD operations, DynamoDB provides two types of batch operations:

  • #batch_get_item – This can be used to read a maximum of 100 items from one or more tables.
    • i.e., you can batch up to 100 #get_item calls into a single #batch_get_item.
  • #batch_write_item – This can be used to perform write operations up to a maximum of 25 items in a single #batch_write_item.
    • Batch Write Item can perform #put_item or #delete_item operations for one or more tables (it does not support #update_item).

You can read more about batch operations in the AWS developer guide.
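
As a minimal sketch (not from the original article; the emails are hypothetical, and the table_name class method used throughout these examples is assumed), fetching several users in a single #batch_get_item call could look like this:

resp = $ddb.batch_get_item(
  request_items: {
    User.table_name => {
      keys: [
        { 'email' => 'jon.doe@example.com' },
        { 'email' => 'jane.doe@example.com' }
      ]
    }
  }
)

users = resp.responses[User.table_name] # array of matching items
# resp.unprocessed_keys lists any keys DynamoDB could not process in this call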

Query and Scan Operations

Batch operations are not very useful for querying data, so DynamoDB provides Query and Scan for fetching records.

  • Query: Lets you fetch records based on the Partition Key and the Sort Key of the Primary or Secondary Indexes.
    • You need to pass the Partition Key, a single value for that Partition Key, and optionally the Range Key with normal comparison operators (such as =, >, and <) if you want to further narrow down the results.
  • Scan: A Scan operation reads every item in a table or a secondary index, either in ascending or descending order.

Both the Query and Scan operations allow filter expressions to narrow down the results returned from the operation.

Conclusion

The fully managed NoSQL database service DynamoDB is a good choice if you don't want to worry about scaling and handling large amounts of data. Even though it promises single-digit millisecond performance and is infinitely scalable, you need to be careful when designing the table architecture and choosing the Primary Key and Secondary Indexes; otherwise, you can lose these benefits and costs may rise.

Ruby SDK v2 provides an interface to interact with DynamoDB, and you can read more about methods provided in the given SDK Documentation. Also, the Developer Guide is a perfect place to understand DynamoDB in depth.




The post Working with DynamoDB appeared first on the Codeship blog.

Monitoring Your Asynchronous Python Web Applications Using Prometheus

Reading Time: 6 minutes

In my last article, we saw how we can integrate the Prometheus monitoring system with synchronous Python applications. We focused on WSGI applications such as those written in Flask or Django and deployed using uwsgi or gunicorn. In this post, we will discuss integrating Prometheus with asynchronous web applications written using aiohttp, an HTTP client/server framework built upon asyncio.


Software Requirements

The sample Python application we will follow along with is tested with Python 3.5+. In addition, we will be using docker (v1.13) and docker-compose (v1.10.0) to run the web application, as well as the other software we will be using. If you don’t have these installed, please follow the official install guide to install these on your operating system.

All the source code we will need to follow along with is in a git repository.

Our Python Web Application

Our Python web application is as follows:

from aiohttp import web

async def test(request):
    return web.Response(text='test')

async def test1(request):
    1/0

if __name__ == '__main__':
    app = web.Application()
    app.router.add_get('/test', test)
    app.router.add_get('/test1', test1)
    web.run_app(app, port=8080)

We expose two HTTP endpoints: test, which simply returns “test”, and test1, which triggers a runtime exception resulting in a 500 Internal Server Error HTTP response. As is typical of web applications, we would want to wrap any unhandled exceptions in a proper error response using a middleware as follows:

@asyncio.coroutine
def error_middleware(app, handler):

    @asyncio.coroutine
    def middleware_handler(request):
        try:
            response = yield from handler(request)
            return response
        except web.HTTPException as ex:
            resp = web.Response(body=str(ex), status=ex.status)
            return resp
        except Exception as ex:
            resp = web.Response(body=str(ex), status=500)
            return resp

    return middleware_handler
...
app = web.Application(middlewares=[error_middleware])
..

middleware_handler() gets called before processing the request. Hence we first call the handler with the request (yield from handler(request)) and return the response. If however we get an exception, we perform additional processing.

We have two exception handling blocks to handle two different kinds of exceptions. An exception of type web.HTTPException is raised in cases handled by the server, such as requests that result in HTTP 404 responses. These exception objects have the status attribute set to the corresponding HTTP status code. The other exception block handles any other error condition that hasn't been handled by the server (such as the ZeroDivisionError raised by the test1 endpoint). In this case, we have to set the status code ourselves, and we set it to 500.

If you need to refer to how the web application looks at this stage, see this commit.

At this stage, we have a basic web application written using aiohttp, which we can further expand as per our requirements using one of the many asyncio-aware libraries. Almost orthogonal to the specific functionality the web application will finally provide, we will want to calculate the basic metrics for a web application and use a monitoring system to aggregate them, which in our case is Prometheus.

Exporting Prometheus Metrics

Next, we will modify our web application to export the following metrics:

  • Total number of requests received
  • Latency of a request in seconds
  • Number of requests in progress

Prometheus classifies metrics into different types; the metrics above map to a counter, a histogram, and a gauge respectively. We will write another middleware to initialize, update, and export these metrics.

First, we will initialize the metric objects and register a handler function for handling requests to the /metrics endpoint:

from prometheus_client import Counter, Gauge, Histogram, CONTENT_TYPE_LATEST
import prometheus_client

async def metrics(request):
    resp = web.Response(body=prometheus_client.generate_latest())
    resp.content_type = CONTENT_TYPE_LATEST
    return resp


def setup_metrics(app, app_name):
    app['REQUEST_COUNT'] = Counter(
      'requests_total', 'Total Request Count',
      ['app_name', 'method', 'endpoint', 'http_status']
    )
    app['REQUEST_LATENCY'] = Histogram(
        'request_latency_seconds', 'Request latency',
        ['app_name', 'endpoint']
    )
    app['REQUEST_IN_PROGRESS'] = Gauge(
        'requests_in_progress_total', 'Requests in progress',
        ['app_name', 'endpoint', 'method']
    )

    app.middlewares.insert(0, prom_middleware(app_name))
    app.router.add_get("/metrics", metrics)

The prom_middleware function is where the updating of the metrics is done for every request:

def prom_middleware(app_name):
    @asyncio.coroutine
    def factory(app, handler):
        @asyncio.coroutine
        def middleware_handler(request):
            try:
                request['start_time'] = time.time()
                request.app['REQUEST_IN_PROGRESS'].labels(
                            app_name, request.path, request.method).inc()
                response = yield from handler(request)
                resp_time = time.time() - request['start_time']
                request.app['REQUEST_LATENCY'].labels(app_name, request.path).observe(resp_time)
                request.app['REQUEST_IN_PROGRESS'].labels(app_name, request.path, request.method).dec()
                request.app['REQUEST_COUNT'].labels(
                            app_name, request.method, request.path, response.status).inc()
                return response
            except Exception as ex:
                raise
        return middleware_handler
    return factory

The web application code will then be modified to call the setup_metrics() function as follows:

setup_metrics(app, "webapp_1")

Note that we insert the prom_middleware() in front of all other registered middlewares in the setup_metrics() function above (app.middlewares.insert(0, prom_middleware(app_name))). This is so that the Prometheus middleware sees the response only after all other middlewares have processed it.

One reason this is desirable is that, when we have also registered an error middleware as above, the response already has its status set correctly by the time the Prometheus middleware is invoked with it.

Our web application at this stage looks as per this commit. prom_middleware() and error_middleware() have been moved to the helpers.middleware module.

Running the Application

Let’s now build the Docker image for our web application and run it:

$ cd aiohttp_app_prometheus
$ docker build -t amitsaha/aiohttp_app1 -f Dockerfile.py3 .
..

$ docker run  -ti -p 8080:8080 -v `pwd`/src:/application amitsaha/aiohttp_app1
======== Running on http://0.0.0.0:8080 ========
(Press CTRL+C to quit)

Now, let’s make a few requests to our web application:

$ curl localhost:8080/test
test
$ curl localhost:8080/test1
division by zero
$ curl loc

In addition, if you send a request to the /metrics endpoint, you will see that you get back a 200 response with the body containing the calculated Prometheus metrics.

Now let’s kill the above container (via Ctrl+C) and use docker-compose to start our web application and the Prometheus server as well using:

$ docker-compose -f docker-compose.yml -f docker-compose-infra.yml up
...

Now, if we make a few requests again as above, open the Prometheus expression browser at http://localhost:9090, and type in one of the metrics we exported, such as requests_total, we will see a graph of that metric over time.

At this stage, we have our web application exporting metrics that a Prometheus server can read. We have a single instance of our application capable of serving concurrent requests, thanks to asyncio.

Hence, we don't run into the limitation we saw in Part I, the previous article, where we had to run multiple worker processes to serve concurrent requests. That made it hard to get consistent metrics from the web application, since Prometheus could end up scraping any one of the worker processes.

Prometheus Integration in a Production Setup

If we were deploying an aiohttp application in production, we would likely deploy it in a reverse proxy setup via nginx or haproxy. In this case, we will have a setup where requests to our web application first hit nginx or haproxy, which will then forward the request to one of the multiple web application instances.

In this case, we will have to make sure Prometheus scrapes each of the web application instances directly rather than via the reverse proxy, so that every instance is a separate scrape target.

When running on a single host (such as a virtual machine), each instance of the application will listen on a different port. This of course means that the communication between the reverse proxy and the individual aiohttp application instances has to be over HTTP and cannot use faster Unix sockets.

However, if you were deploying the web applications using containers, this would not be an issue, since you would run one application instance per container anyway and these application instances would be scraped separately.

Discovering scraping targets via service discovery

Web application instances can come and go, and the number of them running at any given point in time can vary, so their IP addresses can change as well. We therefore need a way for Prometheus to discover the web applications without manually updating its target configuration. A number of service discovery mechanisms are supported.

Conclusion

In this post, we looked at how we can monitor asynchronous Python web applications written using aiohttp. The strategy we discussed should also work for other asynchronous frameworks (such as tornado) that support the deployment model of running multiple instances of the application, with each instance running only one instance of the server process.

We wrote the Prometheus integration middleware by hand in this post. However, I have put up the middleware as a separate package that you can use for your web applications.

The post Monitoring Your Asynchronous Python Web Applications Using Prometheus appeared first on Codeship.

Building a Remote Caching System: The Sequel

Reading Time: 3 minutes

Last fall, Docker made some big changes that required us to overhaul how our Codeship Pro image caching system worked. Our director of engineering, Laura Frank, published a blog post explaining everything back when we launched this new system.

The gist of this was that Docker no longer allowed images pulled from a remote source to be used as a cache source. This was a security measure to prevent cache poisoning. The workaround to this was to rely on the save and load commands to package up your images and store them on S3 as tarballs. This took a lot more time to do, per image, for a variety of reasons; Laura’s blog post will explain it all in more detail.

Now, though, we’re undoing all of that…because Docker restored the original functionality and made it possible for us to use remote images as a local cache source again.


For you, this means much faster image caching on your builds. For us, it means we're using an Amazon-backed registry with far less disk space and overhead, offering a better experience and faster build times for Codeship Pro builds! If you can't tell, we're very excited about improving the performance of our caching system.

Watch the engineer who’s working on our caching system discuss the update.

Registry-based System Security and Benchmarks

Let’s take a minute to talk about the how and why of these changes a bit more.

What is the new system in more detail?

Essentially, after your builds complete, we push any images you have enabled caching for — using your simple cached: true directive attached to your services, as defined in your codeship-services.yml file — to an Amazon-backed registry we maintain.

These images are set up on registry accounts with credentials unique to your project. No other project has backend access to your cached images, and there is no centrally accessible pool of cached images under a single account.

We're big fans of deferring security to larger providers like AWS where we can. Rather than host our own registry infrastructure and add a new security apparatus to our infrastructure support operations, we made the call that using AWS for this purpose would provide more reliable security than we could achieve internally at our size.

Let’s look at performance

Speed was the main reason we made this change, and the benchmarks we have indicate that the gains for most builds will be quite substantial. Additionally, because we no longer need to save parent image data, and because registries are much smarter about storing only the differentials between images rather than the full tarballs we previously had to rely on, the change also makes a huge dent in the disk space the system requires.

  • React + Postgres: 40% Faster (6 minutes saved)
  • Rails + Postgres: 22% Faster (4 minutes saved)
  • Node + Postgres: 56% Faster (12 minutes saved)
  • Node + Selenium: 30% Faster (5 minutes saved)

Optimizing for Caching

Before wrapping up, let’s also discuss the internals of how image-based caching works so that you can design your Docker projects to get the most out of a caching system.

Caching is layer-by-layer, just like your Docker images. Every specific command in your Dockerfile generates a new Docker layer. So combining two commands into a single command reduces the layer count by one. Inversely, breaking a single command out into multiple commands adds additional layers.

This is important to keep in mind, because when your cached services are built during your Codeship Pro build run, cached layers can be reused only up to the point of a breaking change. Once we hit a layer that’s different from your last build — let’s say your code has changed — Docker will rebuild the rest of the image from that point on. This kind of architecture has some key design considerations for you as a result:

  • Move breaking changes farther down in your Dockerfile
  • Combining statements can reduce image complexity, if they’re not brittle layers
  • Adding your code should be one of the last things you do in your images
  • Be mindful of dependencies and how dependency changes may invalidate the rest of the cached image

If you’re looking for more information on optimizing your builds to make the best use of caching, you can read our blog post on the topic for more suggestions and examples.


The post Building a Remote Caching System: The Sequel appeared first on Codeship.

Visual Testing with Percy and Codeship Basic

Reading Time: 2 minutes

At Codeship, we’re pleased to be able to integrate with several third-party products to make your CI/CD workflows that much smoother. For example, Percy, a visual testing platform, is one of our integration partners that adds extra functionality to your Codeship CI/CD pipelines. Here’s a brief overview of what you can accomplish when Codeship and Percy work together.

Codeship and Percy

Together, Codeship and Percy make a powerful combination — giving you full confidence in your app at every point of your development lifecycle.

Percy has first-class support for Codeship, supporting both Codeship Basic and Codeship Pro and automatically working with Codeship’s parallel test pipelines.

Visual Testing at a Glance

In a nutshell, Percy lets you take screenshots during your test suite and monitor visual changes, as well as get team approval on updates. And in keeping with the spirit of a CI/CD pipeline, it’s all automated.

Percy adds visual reviews to your Codeship tests and GitHub pull requests, helping you see every visual change to your application and catch UI bugs before they’re shipped. You can approve visual changes in one click as part of your team’s code review process.

Getting Started with Percy

To begin, you'll need to add two values to your project's environment variables. Percy provides these when you create a new project inside their application:

  • PERCY_TOKEN
  • PERCY_PROJECT

Find them by navigating to Project Settings and clicking the Environment tab.

Integrating Percy with Codeship Basic

Let's discuss a few examples of how to get started with Codeship and Percy using technologies like Ruby or Ember.

Codeship, Percy, and static sites

To use Percy with static sites on Codeship Basic, install the percy-cli gem. You can do this either in your setup commands or in the Gemfile itself. Install the gem with this command:

gem install percy-cli

Next, simply add the following command to your test commands:

percy snapshot directory_to_snapshot

You can use multiple commands to take snapshots of multiple directories. Note that the directories must contain HTML files.

Codeship, Percy, and Ruby

Want to integrate Percy with Codeship Basic on a Ruby project? Start by installing the percy-capybara gem in either your setup commands or your Gemfile. Install the gem with this command:

gem install percy-capybara

Finally, you need to add specific hooks to whatever test setup you have (RSpec, Capybara, Minitest, etc.). Percy's documentation includes integration information for calling Percy from whichever test framework you use.
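As a rough sketch of what those hooks look like for RSpec with Capybara (based on the percy-capybara gem of that era; double-check Percy's documentation for the exact, current API):

# spec/spec_helper.rb
require 'percy/capybara'

RSpec.configure do |config|
  config.before(:suite) { Percy::Capybara.initialize_build }
  config.after(:suite)  { Percy::Capybara.finalize_build }
end

# In a feature spec: take a named snapshot of the rendered page.
describe 'home page', type: :feature, js: true do
  it 'looks right' do
    visit '/'
    Percy::Capybara.snapshot(page, name: 'home page')
  end
end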

Codeship, Percy, and Ember

Of course, we haven’t forgotten about Ember. For Ember projects, install the ember-percy package by adding the following to your setup commands:

ember install ember-percy

Now, simply add specific hooks into your test specs, which you can find in Percy’s documentation.

Conclusion

To add visual testing to your CI/CD pipeline, consider integrating Percy with your Codeship Basic projects. Interested in other third-party integrations for Codeship? We’ve got you covered with our integrations portal right here.


The post Visual Testing with Percy and Codeship Basic appeared first on Codeship.

4 Ways to Secure Your Authentication System in Rails

Reading Time: 9 minutes

This article was originally published on Duck Type Labs by Sid Krishnan. With his kind permission, we’re sharing it here for Codeship readers.

Authentication frameworks in Ruby on Rails can be somewhat of a contentious topic. Take Devise, one of the more popular options, for example. Critics of Devise point out, perhaps rightly so, that it lacks clear documentation, that it is hard to understand and hard to customize, and they wonder whether we wouldn't be better off using a different gem or even rolling our own custom authentication system. Advocates of Devise, on the other hand, point out that Devise is the result of years of expert hard work and that by rolling your own, you'd forgo much of the security that comes with using a highly audited piece of software.

If you’re new to this conversation, you might ask yourself why you would even need Devise if, like in the Hartl Rails tutorial or the RailsCast on implementing an authentication system from scratch, you can just use has_secure_password and expire your password reset tokens (generated with SecureRandom) within a short enough time period.

Is there something inherently insecure in the authentication systems described in the Hartl tutorial and the RailsCasts? Should a gem like Devise be used whenever your app needs to authenticate its users? Does using a gem like Devise mean that your authentication is now forever secure?


As in most areas of software, the answer is that it depends. Authentication does not exist in isolation from the rest of your app and infrastructure. This can mean that even if your authentication system is reasonably secure, weaknesses in other areas of your app can lead to your users being compromised.

Conversely, you might be able to get away with less-than-optimum security in your authentication system if the rest of your app and infrastructure pick up the slack.

The only way you, an app developer, can answer these questions satisfactorily is to deepen your understanding of security and authentication in general. This will help you as you make the tradeoffs that will inevitably arise when you are building and/or securing your app.

Whether you want to use Devise (or a similar third-party gem like Clearance) or roll your own auth, here are four specific ways you can make authentication in your app more secure. Though your mileage may vary, I hope at the very least one of them gives you something to think about.

Throttle Requests

The easiest thing an attacker can do to compromise your users is to guess their login credentials with a script. Users won’t always choose good passwords, so given enough time, an attacker’s script will likely be able to compromise a significant number of your users just by making a large number of guesses based on password lists.

You can make it hard for attackers to do this by restricting the type and number of requests that can be made to your app within a predefined time period. The gem rack-attack, for example, is a Rack middleware that gives you a convenient DSL with which you can block and throttle requests.

Let’s say you just implemented the Hartl tutorial and you now want to add some throttling. You might do something like this after installing rack-attack:

throttle('req/ip', :limit => 300, :period => 5.minutes) do |req|
  req.ip
end

The above piece of code tells Rack::Attack to limit any given IP to at most 300 total requests every five-minute period. You’ll notice that since the block above receives a request object, we can technically throttle requests based on any arbitrary request parameter.

While this prevents a single IP from making too many requests, attackers can get around it by using multiple IPs to brute-force a user's account. To slow this type of attack down, you might consider throttling requests per account. For example:

throttle("logins/email", :limit => 5, :period => 20.seconds) do |req|
  if req.path == '/login' && req.post?
    req.params['email'].presence # this will return the email if present, and nil otherwise
  end
end

If you’re using Devise, you also have the option to “lock” accounts if there are too many unsuccessful attempts to log in. You can also implement a lockout feature by hand.
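A minimal sketch of what a hand-rolled lockout might look like, assuming a User model with has_secure_password plus hypothetical failed_attempts and locked_until columns:

MAX_ATTEMPTS   = 5
LOCKOUT_PERIOD = 15.minutes

def authenticate_with_lockout(user, password)
  # Refuse to even check the password while the account is locked.
  return :locked if user.locked_until.present? && user.locked_until > Time.current

  if user.authenticate(password)
    user.update(failed_attempts: 0, locked_until: nil)
    :ok
  else
    attempts = user.failed_attempts.to_i + 1
    user.update(
      failed_attempts: attempts,
      locked_until: (Time.current + LOCKOUT_PERIOD if attempts >= MAX_ATTEMPTS)
    )
    :invalid
  end
end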

However, there is a flip side to locking and/or throttling on a per-account basis: attackers can now lock arbitrary users out of their accounts on purpose, which is a form of "Denial of Service" attack. In this case, there is no easy answer. When it comes to choosing security mechanisms for your app, you'll have to decide how much and what type of risk you're willing to take on. Here is an old StackExchange post that might be a good starting point for further research.

A note about Rack::Attack and DDoS attacks

Before actually implementing a throttle, you'll want to use Rack::Attack's "track" feature to get a better idea of what traffic looks like to your web server. This will help you make a more informed decision about throttling parameters. Aaron Suggs, the creator of Rack::Attack, says it is a complement to other server-level security measures like iptables, nginx limit_conn_zone, and others.

DoS and DDoS attacks are vast topics in their own right, so I’d encourage you to dig deeper if you’re interested. I’d also recommend looking into setting up a service like Cloudflare to help mitigate DDoS attacks.

Set Up Your Security Headers Correctly

Even though you probably serve requests over HTTPS, you might be vulnerable to a particular type of Man in the Middle attack known as SSL Stripping.

To illustrate, imagine you lived in Canada and bank with the Bank of Montreal. One day, you decide to email money to someone and type bmo.com into the address bar in Chrome. If you had dev tools open and were on the Network tab, you’d notice that the first request to bmo.com is made via HTTP, instead of HTTPS. A suitably situated attacker could have intercepted this request and begun to serve you a spoofed version of the BMO website (making you divulge login information and what not), and they’d have been able to do this only because your browser used HTTP, instead of HTTPS.

The Strict-Transport-Security header is meant to prevent this type of attack. By including the Strict-Transport-Security header (aka the HTTP Strict Transport Security or HSTS header) in its response, a server can tell a browser to only communicate with it via HTTPS.

The HSTS header usually specifies a max-age parameter and the browser equates the value of this parameter to how long it should use HTTPS to communicate with the server. So, if max-age was set to 31536000, which means one year, the browser would only communicate with the server via HTTPS for a year. The HSTS header also lets your server specify if it wants the browser to talk via HTTPS on its subdomains as well. See here and here for further reading.

To make this happen in Rails, set config.force_ssl = true. This will ensure the HSTS header is sent with a max-age of 180.days. To apply HTTPS to all your subdomains as well, you can set config.ssl_options = {hsts: {subdomains: true}}.
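Pulling those two settings together, a minimal production config sketch looks like this:

# config/environments/production.rb
Rails.application.configure do
  # Redirect HTTP to HTTPS and send the HSTS header
  # (Strict-Transport-Security: max-age=15552000, i.e. 180 days, by default).
  config.force_ssl = true

  # Ask browsers to apply HSTS to subdomains as well.
  config.ssl_options = { hsts: { subdomains: true } }
end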

The loophole in this is that the first ever request to the server might still be made via HTTP. The HSTS header protects all requests except the first one. There is a way to have the browser always use HTTPS for your site, even before the user has actually visited, and that is by submitting your domain to be included in the Chromium preload list. The “disadvantage” with the preload approach is that you will never ever be able to serve info via HTTP on your domain.

Having HSTS enabled doesn’t mean your users will be absolutely safe, but I’d wager they’d be considerably safer with it than without.

If you’re curious, you can quickly check what security-related headers (of which HSTS is one) your server responds with on securityheaders.io. I advise looking into all the headers here to decide if they apply to your situation or not.

Read Authentication Libraries

Reading authentication libraries (Devise, Authlogic, Clearance, Rodauth, and anything else you have access to) is especially useful if you're rolling your own, but even if you aren't, you can learn a lot from how another gem does a similar thing.

You don’t always have to read the source code itself to learn. The change logs and update blog posts from maintainers can be just as informative because they often go into detail about vulnerabilities that were discovered and the steps taken to mitigate them.

Here are three things I learned from Rodauth and Devise that you might find intriguing:

Restricted password hash access (Rodauth)

Unlike Devise and most “roll your own auth” examples, Rodauth uses a completely separate table to store password hashes, and this table is not accessible to the rest of the application.

Rodauth does this by setting up two database accounts: app and ph. Password hashes are stored in a table that only ph has access to, and app is given access to a database function that uses ph to check if a password hash matches a given account.

This way, even if an SQL injection vulnerability exists in your app, an attacker will not be able to directly access your users’ password hashes.

User specific tokens (Rodauth)

Rodauth not only stores password reset and other sensitive tokens in separate tables, it also prepends every token with an account ID.

Imagine your Forgot Password link looked something like this: www.example.com/reset_password?reset_password_token=abcd1234

If an attacker were trying to guess a valid token, their guess could potentially be a valid token for any user. If we prepend the token with an account ID (so the token looks like reset_password_token=<account_id>-abcd1234), then the attacker can only attempt to brute-force their way into one user account at a time.
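A minimal sketch of checking such an account-scoped token (the params key and the PasswordResetKey model are hypothetical):

account_id, token = params[:reset_password_token].to_s.split('-', 2)
record = PasswordResetKey.find_by(account_id: account_id)

# Even a well-formed guess is only ever checked against the single account
# it names, never against every outstanding token.
if record && ActiveSupport::SecurityUtils.secure_compare(record.token, token.to_s)
  # token is valid for exactly this one account
end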

Digesting tokens (Devise)

Since version 3.1 came out a few years ago, Devise digests password reset, confirmation, and unlock tokens before storing them in the database. It does so by first generating a token with SecureRandom and then digesting it with the OpenSSL::HMAC.hexdigest method.

In addition to protecting the tokens from being read in the event that an attacker is able to access the database, digesting tokens in this manner also protects them against timing attacks. It would be near impossible for an attacker to control the string being compared enough to make byte-by-byte changes.
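The underlying idea is simple enough to sketch; the secret-key source shown here is illustrative, not Devise's actual internals:

require 'securerandom'
require 'openssl'

secret_key = ENV.fetch('TOKEN_DIGEST_KEY')   # hypothetical application secret

raw_token = SecureRandom.urlsafe_base64(32)  # this is what gets e-mailed to the user
digest    = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA256'), secret_key, raw_token)

# Store only `digest`. When a reset comes in, digest the submitted token the
# same way and look that up, so the raw token never touches the database.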

If you want to know more about Rodauth, check out their GitHub page and also watch this talk by Jeremy Evans, its creator.

The more you know about how other popular authentication frameworks approach authentication and the steps they take to avoid being vulnerable to attack, the more confident you can be in assessing the security of your own authentication set up.

Secure the Rest of Your App

Authentication does not stand in isolation. Vulnerabilities in the rest of your app have the potential to bypass any security measures you might have built into your authentication system.

Let’s consider a Rails app with a Cross Site Scripting (XSS) vulnerability to illustrate.

Imagine the XSS vulnerability exists because there’s an html_safe in the codebase somewhere that unfortunately takes in a user input. Now, because our app is a Rails (4+) app, we have the httpOnly flag set on our cookie by default, which means any JavaScript an attacker is able to inject won’t have access to document.cookie.
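A hypothetical example of the kind of html_safe misuse described above:

# A helper that interpolates user-controlled input and then marks the whole
# string as safe HTML; if user.bio contains a <script> tag, it will run.
def bio_html(user)
  "<p>#{user.bio}</p>".html_safe
end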

Though it might seem like our app is safe from session hijacking attacks, our attacker can still do a bunch of things that compromise a user’s session. For example, they can inject JavaScript that makes an AJAX request to change the user’s password. If the password change form requires the current password, they can try to change the user’s email (to their own) and initiate a password reset flow.

In short, an XSS vulnerability sort of makes it irrelevant how secure your authentication is, and the same can be said of other vulnerabilities like Path Traversal or CSRF.

Learning about security vulnerabilities and then applying that knowledge to attack your own app is a great way to, over the long term, write more secure code. I’d also encourage you to read through resources like the Rails Security Guide and security checklists like this and this.

Conclusion

The above list is not meant to be comprehensive, and what I've left out could probably fill multiple books. However, I hope I've given you a few things to think about and that you're able to take away at least one thing that will make your app more secure.

Shameless Plug: If you’re within a few hours’ flight from Toronto, I’d love to come talk to you and your team about security, free of charge. Get in touch with me at sidk(AT)ducktypelabs(DOT)com for more info.

I’d love to hear from you! Post in the comments section below. What do you do to secure your auth?


The post 4 Ways to Secure Your Authentication System in Rails appeared first on Codeship.

Scala Meetup - September 2017

Scala Vienna User Group

- Intro + Plans (Oleg Rudenko)

1. Scala.js - Scala in your Browser (Matthias Braun)

- break

- Raffle (2 Manning e-books || IntelliJ Ultimate Edition License)

2. Getting started with gRPC in Scala (Petra Bierleutgeb)

Lightning Talk: Running Play Applications on Docker (Daniel Pfeiffer)

- networking


About the authors:

- Petra is a freelance software engineer currently working on a project at Starbucks. She <3 Scala, FP and Linux.

- Matthias is a freelance software engineer with a strong interest in functional web programming, continuous delivery, and design. He enjoys coding from home, reading HP Lovecraft, and vegan cuisine.

- Daniel is a Software Engineer at Firstbird building a SaaS employee referral tool, turning your network into a recruiting engine. He studied Business Informatics at the Vienna University of Technology and has built software professionally for banking, human resources and the public sector.

Vienna - Austria

Thursday, September 14 at 7:00 PM

30

https://www.meetup.com/scala-vienna/events/240059788/

August WordPress Vienna Meetup at the CodeFactory Campus (Brick 5)

Vienna WordPress Meetup

This time our host is the coding school “CodeFactory”. They invite us to their campus close to Brick 5, http://brick-5.at....

As it is summer, we'll keep it casual this time and mostly just hang out, enjoy our drinks, and talk all things WordPress and related. If you want to meet people who have similar jobs, are bloggers, or are web enthusiasts, join us!

As a conversation starter, we'll have the honour of hosting Paolo Belcastro. He is a very active member of the WordPress community and, among many other things, one of the founders of WPVienna. He'll tell us how looking for a WordPress Meetup in Vienna in November 2012 led to organising WCVIE in 2015, WCEU in Vienna in 2016, and WCEU in Paris in 2017. Don't forget to bring your questions, as this won't be a classic frontal talk but basically an AMA session and your unique chance to hear stories right from the very heart of the WordPress community.

Of course, members of other meetups that deal with web design, content management (systems), web programming, SEO, etc. are also welcome!

Schedule

18:30 | Arrival, Registration
19:00 | Welcome & Introduction
19:15 | AMA Session with  Paolo Belcastro
19:45 | Socialising!
21:00 | Leaving CodeFactory for drinks somewhere close by

___________________________

P. S. Speakers Wanted!

We are looking for interesting speakers and topics for the next WordPress Meetups in autumn and winter. Please use our speaker application form if you want to give a presentation (10–30 minutes).

P. P. S.: The Austrian WordPress community meets on Slack

If you want to talk to others in the Austrian WordPress community, join us on Slack:

  • wordpress.slack.com (worldwide community): register here first! Use the e-mail address you registered at wordpress.com or wordpress.org
  • austriawpcommunity.slack.com (Austrian community)
  • dewp.slack.com (German community)

Information about joining our Slack channel!



Vienna - Austria

Wednesday, August 9 at 6:30 PM

57

https://www.meetup.com/Vienna-WordPress-Meetup/events/239895264/


pluto.models/1.4.0, feed.parser/1.0.0, feed.filter/1.1.1 - Ruby/2.0.0 (2014-11-13/x86_64-linux) on Rails/4.2.0 (production)