XING Devblog

Android Smartphone Test Farm

Mobile is becoming increasingly important for companies that build web applications, and that also includes XING. Over 50% of our platform traffic comes from mobile devices. This in turn leads to a constant increase in the complexity and amount of testing work required on mobile devices.

Our challenge

At the beginning of 2015 XING launched a new internal initiative called “Unleashing Mobile”. The idea behind it is to scale mobile development up from a single mobile team to multiple teams within the company. The previous team setup was simply not able to keep pace with the development speed of the web platform and bring more and more features to the Android, iOS and Windows Phone platforms. As things stand, we have 5 mobile feature teams developing features like profile, jobs, content or messages. Besides that, each platform has a central core team divided into a platform and a framework sub-team. The core platform team works on features that haven’t yet been handed over to the domain teams. As well as building its own app features, the core team has adopted more of a consulting role, helping to keep the whole app consistent and clean. Another key task of the central core teams is to integrate all of the code changes every two weeks to make sure that a stable app version can be released to our users.

The framework team works at an architectural level by providing other developers and software test engineers with the tools they need to carry out their duties, and by executing the bi-weekly releases.

The unleashing initiative also has a major impact when it comes to testing. Every team needs to have its own test devices or borrow some from the core teams. Handing out devices to the teams is quite hard, as a lot of devices are needed and have to be maintained. Moreover, the teams are split across several locations, which makes it difficult to lend and borrow devices. On top of that, there are so many different mobile devices on the market (especially for Android) that it is almost impossible to buy them all for every team. By now we have more than 40 different Android devices at XING that are representative of the devices most used by our users. This number is growing on a monthly basis as we buy new phones and tablets that show up in our usage statistics.

XING Android Devices


To simplify this process and to ensure smooth coordination between all of the teams, we took advantage of our internal HackWeek to research the Smartphone Test Farm for Android based on
→ Read more…

Posted by Jan Ahrens

Filed under API

How to write an OAuth implementation

This is the second part of a two-part series on our own OAuth implementation. Read the first part to find out why we decided to write our own implementation.

After we decided that it’s best to write our own OAuth implementation, we started with some planning. We identified three properties that our new implementation should fulfil:

  1. Compatibility with the “oauth” gem
  2. Run-time efficiency
  3. Fixing the bug

Our biggest fear was that certain requests that were previously accepted would get rejected by our new implementation. We thought that we might get a little detail in the implementation wrong or that the gem allowed certain edge cases we didn’t know about.

Fail fast and without impact

We chose to run the new implementation in parallel with the current one and use our live API traffic to figure out how good the new implementation was. To do this, we kept the “oauth” gem in charge of deciding whether a request was valid. After the gem version had made its judgement, we called our own implementation and compared the results.

As we didn’t know how efficient our own implementation would be, we decided to test only a very small percentage of the overall traffic against it. To limit the traffic, we used a very simple method: Kernel#rand. Of course we knew that this isn’t a very exact way to get a fixed percentage of the traffic, but it proved to be a good estimate and was at the same time easy and fast to implement.

check_new_oauth_implementation(old_oauth) if rand < 0.01

We set up statsd buckets to measure the results of our comparison. To display the data and track our progress, we configured graphs in Graphite.
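In outline, the bookkeeping looked something like this (a minimal sketch; the bucket names are made up, and a plain Hash stands in for the statsd client):

```ruby
# Stand-in for the statsd client: in production each increment is sent
# to a statsd bucket and graphed in Graphite.
COUNTS = Hash.new(0)

def record_comparison(old_valid, new_valid)
  COUNTS['oauth.requests.total'] += 1
  if old_valid == new_valid
    COUNTS['oauth.requests.match'] += 1
  else
    COUNTS['oauth.requests.mismatch'] += 1
  end
end

# Three compared requests, one of which disagrees with the oauth gem.
record_comparison(true, true)
record_comparison(false, false)
record_comparison(true, false)
```

With counters like these, the “total”, “match” and “mismatch” lines in Graphite show at a glance how close the new implementation is to the old one.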

We deployed our first and very naive version after we put everything in place. The Graphite graph validated our assumption: our implementation was naive. The majority of requests that the new implementation processed were judged incorrectly. The first orange line (“r1”) in the graph shows that.

Now that we knew that the implementation was wrong, we wanted to find out what it did wrong. The graph didn’t show the causes of the errors. That’s why we wanted to log differences in the Signature Base String.

We chose Redis to store the data, because it’s fast, it was really easy to implement a ring buffer and we were already using Redis to store other data. To implement the ring buffer, we only had to combine the lpush and ltrim commands.

def check_new_oauth_implementation(old_oauth)
  # ...

  if old_oauth.result != new_oauth.result
    log_data = {
      old: old_oauth.basestring,
      new: new_oauth.basestring
    }
    # redis manages an established "hiredis" connection;
    # serialize the hash, since Redis stores strings
    redis.lpush('oauth_basestring_mismatch', log_data.to_json)
    redis.ltrim('oauth_basestring_mismatch', 0, 99)
  end
end

With the help of this log we were able to spot some of the mistakes that the implementation made, and we deployed the next version (“r2” line). Unfortunately this version made things worse, as the graph clearly shows. Whoops! We decided to call it a day and continue the next morning.

The next day we fixed the stupid bug (“r3”) that we’d made and continued to monitor the contents of the Redis-based ring buffer. You can see the progress from that day in the next picture. With releases “r4” and “r5” we fixed more implementation differences.

Once the “valid” line matched the “total” line and the “invalid” line went flat, we were sure that our implementation was good enough.

Is it efficient?

As a next step, we wanted to find out how efficient our implementation was.

We increased the floating-point number at our “rand” switch to test a higher percentage of our traffic, and started to monitor the performance implications in Logjam.

Logjam is the tool we use at XING to find performance hot spots and errors in our services. It’s similar to New Relic, with the added benefit of being open source and hackable. Logjam is developed and maintained by our XING colleague Stefan Kaes.

It utilizes the time_bandits gem to measure how much time a request spends in which parts of our Rails code.

For the XING API we implemented a custom time_bandits consumer that was responsible for measuring the time spent verifying OAuth signatures. We closely monitored this time as we gradually increased the number of requests that the new implementation was receiving (“r6” and “r7” lines).
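Reduced to plain Ruby, the measurement has roughly this shape (a sketch only; the real hook-up goes through a time_bandits consumer, and `verify_signature` here is a hypothetical stand-in for the actual check):

```ruby
# Hypothetical stand-in for the real HMAC-SHA1 verification.
def verify_signature(request)
  request[:signature] == 'expected'
end

# Time a single OAuth verification; the accumulated per-request time is
# what a time_bandits consumer would report to Logjam.
def timed_verify(request)
  started    = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result     = verify_signature(request)
  elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - started) * 1000.0
  [result, elapsed_ms]
end

result, elapsed_ms = timed_verify(signature: 'expected')
```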

Fortunately there was no serious performance impact. The next day we were confident enough to check 100% of our requests against both OAuth implementations.

Yay, bugs!

Even with 100% of the API traffic, the graphs showed no spikes in errors. At first we were really happy about this, but soon we became skeptical. There had to be some edge cases that we had missed so far, we thought. When we once again took a look at the contents of our ring-buffer log, we knew that our gut feeling was right.

The Signature Base String comparison showed that some consumers were duplicating OAuth parameters: the same OAuth parameter was sent in the form-encoded body and in the request URI query.

POST /v1/users/me/conversations?oauth_consumer_key=barbaz
Content-Type: application/x-www-form-urlencoded

oauth_consumer_key=barbaz

So far we had assumed that parameters in the body and in the query part of the URI get combined for the Signature Base String calculation. But how exactly? We thought that they were unique, so that duplicating them would have no visible effect on the Signature Base String.

We were wrong. As it turned out, both copies have to be included in the Signature Base String. Those were the kind of implementation details we were afraid of.
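A quick way to see this is to follow the parameter normalization from RFC 5849 (section 3.4.1.3): the pairs from the query string and from the form-encoded body are concatenated, sorted and joined, so a duplicated parameter shows up twice in the base string. A simplified sketch (real implementations percent-encode the names and values first):

```ruby
require 'uri'

query_params = URI.decode_www_form('oauth_consumer_key=barbaz&format=json')
body_params  = URI.decode_www_form('oauth_consumer_key=barbaz')

# Duplicates are NOT collapsed: both copies survive into the base string.
normalized = (query_params + body_params)
             .sort
             .map { |k, v| "#{k}=#{v}" }
             .join('&')

normalized
# => "format=json&oauth_consumer_key=barbaz&oauth_consumer_key=barbaz"
```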

Being really sure

We continued to run the old and the new implementation in parallel for several weeks. During this time we found some even more remote edge cases. We had to consult the RFC a few times, and some of these consultations resulted in further refinements, but eventually we had done enough to wipe away our doubts. We decided to put the new implementation in charge of checking the authenticity of all API requests.

Posted by Jan Ahrens

Filed under API

Why we wrote our own OAuth 1.0 implementation

The XING API runs on OAuth 1.0, a protocol that lets our users grant 3rd-party apps access to parts of their data and actions. Its core feature is that users don’t need to give their password to the 3rd-party app. Most likely you know this process from other platforms, too.

Before continuing, we need to clarify an important detail. You might be reading this and asking yourself: “Wait… 1.0… why are they still using OAuth 1.0? Hasn’t OAuth 2.0 been out for a very long time?”

We also asked ourselves that question and want to briefly share our view. To begin with, it works for us. Version 1.0 is simple. Admittedly, it also has its flaws, like the complicated signature calculation, and we’ll talk about this later. Apart from that, it’s a well-matured protocol and fits our needs perfectly. To be honest, so far we haven’t needed any of the OAuth 2.0 features, and that’s why we’re still sticking with version 1. Another downside is that the second version got way more complicated. Did you know that Eran Hammer, lead author and editor of OAuth 2.0, no longer wants to be associated with the OAuth 2.0 standard?

At XING we use Ruby to power our API. The natural choice when we started to develop it was not to reinvent the wheel and to use the well-written oauth gem. Some months ago we decided to revisit this decision and write our own OAuth 1.0 implementation. In this article I want to explain why.

Debugging OAuth

Recently we received a bug report from one of our consumers, who was trying to send an HTTP PUT request. It was a signed request using HMAC-SHA1, one of the signature methods supported by the OAuth protocol.

The request failed and the consumer managed to convince us that their code was bug free.

It was clear that the request failed because the OAuth signature did not match. To understand the problem, it helps to have a little bit of background about those signatures. When using OAuth, every HTTP request is signed to authenticate it. The signing prevents clients from sending the secrets (to be specific, the Consumer Secret and the Access Token Secret) along with the request. After all, they are named “secrets” for a reason, right? A signature base string gets calculated over various request parameters and is then cryptographically signed with the secrets. On the server side this process is repeated and the signatures are compared. If they match, the request is authenticated; if not, it gets rejected.
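For HMAC-SHA1 the whole flow can be sketched in a few lines (the values are made up, and the percent-encoding of secrets and parameters is simplified compared to the RFC’s rules):

```ruby
require 'openssl'
require 'base64'
require 'erb'

http_method       = 'PUT'
base_uri          = 'https://api.xing.com/v1/users/me'
normalized_params = 'oauth_consumer_key=barbaz&oauth_nonce=abc123'

# Signature base string: method & encoded URI & encoded parameter string.
base_string = [
  http_method,
  ERB::Util.url_encode(base_uri),
  ERB::Util.url_encode(normalized_params)
].join('&')

# Signing key: consumer secret and token secret joined with '&'.
key = 'consumer-secret&token-secret'

# The 20-byte SHA1 digest is Base64-encoded into the oauth_signature value.
signature = Base64.strict_encode64(OpenSSL::HMAC.digest('SHA1', key, base_string))
```

The server rebuilds the same base string from the incoming request, signs it with the stored secrets, and compares the result with the signature the client sent.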

To find out what was going on, we reproduced the consumer’s request and looked at our API’s behavior with a debugger. We managed to trace the cause of the problem to a statement in the oauth gem.

The consumer was sending an HTTP PUT request with a body. This body contained x-www-form-urlencoded data that, by definition, has to be included in the signature base string calculation. The oauth gem, however, does not look at the bodies of HTTP PUT requests. Clearly a bug.

How to fix it?

We knew that the error was happening inside the oauth gem and we knew how to fix it. “Why not send a pull request?”, we thought.

As it turned out, the last gem release was in 2012, and the project on GitHub no longer looks maintained, as the last pull request was merged in December 2014. So sending a pull request wasn’t an option.

The next thing we thought about was forking the gem, fixing the bug, and publishing the fork. We tried to do this but soon gave up because the dependencies were difficult to set up.

As a next step, we tried to find other gems that implemented OAuth 1.0. We found simple_oauth and it looked very promising. It basically consists of a single Ruby class that implements the verification part of the protocol in 133 lines of code.

Unfortunately we couldn’t use that one either, because it focused on reading OAuth parameters from the Authorization header. We tried to work around that, but soon gave up, as it got too complicated.

Inspired by the simplicity of the “simple_oauth” gem, we decided to write our own implementation.

In the next blog post we’ll explain how we ensured that the new implementation was correct and efficient.


At the beginning of this month, the beautiful city of Gent played host to the 9th edition of ArrrrCamp. Originally more of an impromptu meetup, the event has evolved over the years to become a Ruby-focused development conference. Over the course of two days, we were presented with a carefully curated program of regular talks, keynotes and lightning talks.
→ Read more…

Posted by

Filed under Agile

Co-active coaching meets Agile

Author: Alexey Krivitsky

On the last Friday of the summer we ran the 5th meet-up of the Agile Coaching Circle community on the topic of Co-Active Coaching in the Agile Context. Again it was warmly hosted by XING in Hamburg.

Meet-up: Co-Active coaching meets Agile


The four previous gatherings were dedicated to agile topics like advanced retrospectives (with Marc Löffler), the agile coaching culture at Spotify (with Joakim Sundén) and #NoEstimates (with Vasco Duarte) – we’ve had a lot of nice agile-focused discussions over the year.

But since the goal of our community is to deepen not only our agile skills but also our coaching skills, this time we decided to dive in and see what a pure coaching experience might look like.

We were very lucky to have Hannes Entz von Zerssen (an executive coach and trainer) join us and enlighten us on the mindset and toolbox of professional coaches, and especially on the Co-Active Coaching model, which Hannes is a seasoned teacher of.

Coaching in action

Demoing a coaching session

Coaching power

Explaining how coaching works

After the coaching demo with one of the volunteers, we were asked to practice on each other: namely, to find a partner and practice coaching dialogs for 20 minutes or so. Despite the fact that coaching was new to some of us and we only had that much time, a number of people confirmed that they gained interesting insights into actual problems and found new ways of dealing with them. Coaching is definitely a powerful tool we can all use to help each other on a daily basis.

But what about agile coaching?

Later we discussed several burning questions from the audience, one of which was about applying a pure coaching view in the context of agile coaching. In fact, that is a hot topic. In 1:1 coaching, you (the coach) and your client (the coachee) have a working agreement (rapport), and the coachee comes up with a coaching request. Having the permission to coach, the coach has the right to challenge the client, push for responsible actions, and explore the unpleasant unknowns and what-ifs. On the battlefield of agile coaching, by contrast, we (the coaches) and our teams (the employees of the client) are generally not bound by such coaching agreements. What makes it complicated is that the people who order our services (usually the top management) send us to fix the situation down there. That makes agile coaching challenging and not always as efficient as it could be: the people we tend to coach (e.g. the agile teams) haven’t requested coaching and have never given permission to be coached… Some people at the meet-up said that the teams they are working with get constantly annoyed by them asking so many (stupid) questions…

My personal insight from that discussion is that we (agile coaches), before engaging with the teams, have to explain where we are coming from (our coaching mindset) and thereby help the folks become comfortable with our tooling (the powerful questions). Only then can we start building the rapport and engaging in rich coaching dialogs.

That was enlightening!

It was also refreshing to learn that in Co-Active Coaching, “designing the alliance” is a mandatory part of the model: the coach and the coachee spend time together defining the process and communication protocols, and in general agreeing on how to be and interact with each other.

We, working in the context of agile coaching, need to think hard about how to include this alliance-designing process in our coaching.

Despite the fact that coaching is a sort of magic, it can be taught and learned. Learn more about Co-Active Coaching (also available in English) or contact Andreas Kömmling, who will help you find out more about it.

Interested in attending our next community meet-ups?
Join our group Agile Coaching Circle.

Posted by

Filed under Agile Mobile

Scaling Mobile at XING: Platform, Framework and Domain Teams

Author: Alexey Krivitsky

Here at XING we’ve been facing quite an exciting challenge.

Learning how to scale mobile development in a way that allows as many teams as needed to contribute to the development of our mobile apps (on both the iOS and Android platforms) while at the same time keeping the apps consistent, stable and shiny… That’s not something you can do by the book (is there actually a book on this subject?).

So we had to come up with our own way of solving this complex matter.

This paper is envisioned as a quick-read guide to our journey over the last few years. It summarizes all the key decisions and structural changes we made in order to scale mobile from 2 to 10 teams.

If your company’s next challenge is to take mobile development to the next level, you might find some of the ideas listed below worth trying.

Read the full article on InfoQ >>>

Posted by

Filed under Everything else Stuff XING at conferences

XING @ So Coded Conference

It’s not that often that you hear the terms tech conference and church in the same sentence. Well, in this case it happened to be true. Last week XING was the main sponsor of the So Coded conference that took place at Kulturkirche Altona in Hamburg.

When I walked into the church on Thursday morning, I experienced a mixed feeling of being at a techie gathering yet also feeling relaxed due to the location and the way it had been decorated, which is not what you’d expect from most tech events. Just to clarify, this location is used for events like So Coded and not just for religious ceremonies!

Unlike most of the other conferences, this one started at noon, meaning that there was plenty of time to network while enjoying a delicious breakfast. The opportunity to network with people even before the event kicked off gave everyone a chance to get to know the other attendees and the location, and this was something which I particularly liked. 

Conference opening

Photo wall

During the hack night

The talks themselves included a good mix of tech and non-tech topics, well distributed over the two days, starting with Hannah Schickedanz sharing how her life changed as a result of doing the things she loved, such as travelling around New Zealand with her family in an old school bus, and on to Rachel Myers, who talked about why a service-oriented architecture is not the holy grail!

As a company, we’re happy to support events like So Coded as they give us the opportunity to put something back into the community. A team of our engineers and engineering leads had the chance to connect with a lot of people at So Coded who came from various places around the world to share the XING and Hamburg love. For the next So Coded, however, it would be really cool to have a few more participants. It was a pleasure meeting everyone there, and let’s not forget all the photos we took with the Polaroid. In fact, we built a photo wall as a way of saving the memories.

At the end of the day, we all have a reason why we go to work, and here at XING we’re “committing for a better working life”.

Until next time, 


Posted by Alexey Krivitsky

Filed under Agile

Sprint Review Turned Into A Beer Fest

Today our sprint review was different. It didn’t look like a meeting at all. If you had entered the team areas, you wouldn’t have seen anything drastically different from our normal working mode: people talking in small groups, several people discussing something while looking at their mobile phones, others leaning over tables and looking at screens. Nothing like a meeting.


Sprint Review: API stand.
People are trying new API calls


Sprint Review: iOS stand.
Folks are trying a new feature being built

But two weeks ago, and indeed every two weeks for the last year or so, this event looked quite different. We would all sit in one of our large meeting rooms with 40-50 people attending. One person would speak on a stage for about 7 minutes while the others watched the presentation. Then someone would maybe ask one or two questions. Then people would applaud and switch to the next presenter.

This was the “standard” way we used to run joint sprint reviews for the 3 teams in our XING mobile cluster. Since our review was well attended, we invited several other teams to join, and in the end we had one of the biggest sprint reviews at XING. Huge success! Really?

But despite the high attendance of guests, including the C-level executives, the feedback we started to get from the teams was rather on the negative side: the reviews were taking too much time to prepare; it was stressful to speak in front of so many people; not all of the presentations were uniquely valuable and engaging… and so on and so forth. But the main negative signal we heard was rather strong: the teams were receiving close to zero feedback (applause doesn’t count).

That was a refreshing “aha” moment for me (an Agile coach) when I came to realize this, especially because I was one of the people behind the creation of the joint review format. As a quick history tour: two years ago, when I joined XING, each team had a dedicated Sprint Review. But the teams were not receiving equal attention from the stakeholders and management, and it was quite challenging for the stakeholders to attend all of the teams’ reviews. So we decided to merge the reviews. What we ended up with was a gigantic Sprint Review that was well attended by stakeholders and the top company people.

But somewhere along the way we apparently lost an important piece: feedback on the product. That was definitely a learning path.

But why is feedback so important? Well, Scrum is based on the mantra “inspect and adapt”, and that’s for a reason. Software development is a complex process that requires empirical process control. In simple words this means: you can’t just plan software development and then go and execute the plan (that would be a defined process). Instead, you need to go a small distance, then stop for a moment, look around, and based on what you see decide how to adjust your route. That’s empirical.

And that’s what Scrum teams are supposed to do when running a Sprint Review (and also a Sprint Retrospective): stop, inspect what they did, and decide how to adapt in order to improve.

Now back to our situation. We did the stops, but we didn’t get enough useful insights on where to go next. So why have a Sprint Review at all? I’m sure that question was on many people’s minds…

So we followed the advice of the Large-Scale Scrum practitioners: do a Sprint Review Bazaar (we, being a German company, like the “Beer Fest” metaphor better).

So this is what we did:

  1. Asked each team to set up a booth where they would welcome guests (a desk with some candies is good enough).
  2. Made sure each team had at least 2 people at the booth at any time, talking to the guests.
  3. Crossed our fingers and let the folks self-organize.

Sprint Review: Android stand.
Guests are learning about the Android mechanism of background jobs.

Sprint Review: Mobile Infrastructure.
People can see how the improved push notification system behaves.

Since our main goal for this particular Sprint Review was to run an experiment and see whether such an approach could work in general, we collected just-in-time feedback from our teams. And what can I tell you – so far it looks promising! People were saying things like:

more interaction
easy to try things out
you can go into more detail
you can now ask many more questions
cozy and intense conversations
less preparation needed

There are of course things to improve, so more experiments to come. Inspect and adapt, right?

But what has become obvious is that the “Beer Fest” style of Sprint Review is the way to go, at least for the next few sprints until we find an even better idea.


Posted by Renzo Crisóstomo

Filed under iOS Mobile

iOS Culture @ XING

A couple of weeks ago our colleagues from the iOS team in Barcelona came to visit us. It’s fairly uncommon for all the iOS developers to be together in one place so we decided to hold an offsite meeting with the aim of creating a distributed iOS culture at XING and building a strong relationship between all of the developers. Here’s the story about how it went.

The location? A castle near Bremen with everything you could imagine, ranging from a bar to a big hall complete with fireplace! We arrived early in the morning, so right after unpacking we had breakfast and started with some team building activities outside. It was pretty cold as you can probably tell from the photo!


After this, everyone met inside again for tech talks, which is something we do every week. As you may know, we recently launched XING 5.0 on iOS, a universal app delivering the best of XING’s information patterns on both iPhone and iPad. This was a huge team effort involving nearly every single iOS developer. But now we’re organised in different teams, each one focussing on a different XING product or feature, so there’s always interesting stuff to share!


It was almost noon and we were getting pretty hungry, so we started our next activity: social cooking. We divided ourselves up into small teams with the aim of cooking any dish we liked, but there was a catch: we had to present the dish using a haiku. The results were just fantastic, as you can see in the photo!

Now it was almost 3:00 pm and we hadn’t coded anything all day, so people were starting to get anxious. For this reason we moved to another location for our next activity: a blazing-fast hackathon. The rules? 2 hours to code any kind of game you like using Swift. At first everyone thought it was just insane, but we ended up having an awesome time. You can find some of our projects on GitHub, but don’t expect polished results as we only had 2 hours to work on things!


After our coding activity, we went back to the castle and enjoyed the rest of day with a BBQ outside, including drinks and games at the bar and melting marshmallows by the fireplace!

Right after breakfast the next day, we started our next activity: a workshop where we ran through different exercises with the aim of defining our team values and working principles, which was also designed to help us understand and promote the way we work. It was amazing how in sync we already were on everything, so at some point we were asked to think outside the box to find ways to promote our work, which resulted in some really interesting ideas!


After the workshop, we continued with tech talks over lunch. Once that was over, our Mobile Engineering Director, Alexander Greim, arrived in order to assist us with the next and final activity. Based on XING’s openness as an organisation, the next activity was designed to answer our questions and provide feedback about a recently created role in our team: Lead Developer iOS!

This nicely rounds off my brief story about our offsite meeting. In summary, I can say that we achieved our aim of creating a distributed iOS culture that now helps us maintain consistency in our team decisions, and we also built strong relationships that improve our daily work. Did you like what you read here? Visit our website for more information about our team and current vacancies. See you next time!