Radoslav Stankov

http://rstankov.com

Learnings from GraphQL Europe

This weekend, I had the pleasure of attending GraphQL-Europe, the first GraphQL conference in Europe.

Here is a recap of the most notable things I learned there:

How GraphQL got from zero to wide adoption in one year

The GraphQL spec was released less than a year ago, but it feels a lot more mature. This is because it was in production at Facebook for about 5 years before being open-sourced.

The conference started with "GraphQL: Evolution or Revolution?", a great history lesson by Jonas Helfer.

In Five Years of Client GraphQL Infrastructure, Dan Schafer told us a couple of stories about how features like mutations were invented. There wasn't much upfront design; they were just solving issue after issue, first in the frontend, then moving to the backend.

This connected very well with the Closing Keynote by Lee Byron, in which he talked about the future of GraphQL.

Those three talks convinced me that GraphQL is not an accident.

Subscriptions

I hadn't paid much attention to GraphQL Subscriptions in the past. They are now officially merged into the GraphQL spec.

In Realtime GraphQL from the Trenches, Taz Singh gave an interesting look into them.

I don't have many reasons to use them, mostly because Product Hunt doesn't need them at this time.

Persisted queries

Another feature I hadn't paid much attention to is persisted queries, which was a miss on my part.

I had a couple of conversations about persisted queries in between talks. Besides their obvious benefits (small requests, analytics, and caching), I hadn't realized they are also good from a security standpoint: if your GraphQL endpoint accepts only persisted queries in production, it is impossible for somebody to open the Chrome console and start firing expensive custom queries.
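A minimal sketch of what this whitelisting could look like on the server (the query identifiers and storage here are hypothetical; in practice the map is extracted from the client's code at build time):

```ruby
# Hypothetical whitelist of queries the client is allowed to run.
PERSISTED_QUERIES = {
  'posts-feed-v1' => 'query { posts(first: 10) { id title } }'
}.freeze

# In production, accept only a query id - never a raw query string.
def resolve_persisted_query(query_id)
  query = PERSISTED_QUERIES[query_id]
  raise ArgumentError, "unknown persisted query: #{query_id}" unless query

  query # hand this string to the GraphQL executor
end
```

Any custom query typed into a console simply has no id in the whitelist, so the server rejects it before execution.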

Handling errors

The Panel Discussion was very interesting. One of my major takeaways from it is that there isn't a standard way in GraphQL to handle form errors.

The format I personally prefer looks something like this:

updateProfile(input: UpdateProfileInput) {  
  node: Profile
  errors: [Error]!
}

(I plan to write a blog post about form errors, someday)

I didn't know that in GraphQL you can return both a response and an error. It turns out that is a good strategy for handling failures in resolvers that don't break the whole request: you can just return null for a given node and return errors explaining the failure.

{
  "errors": [
    {
      "message": "Service for fetching widget 1 failed",
      "locations": [ { "line": 4, "column": 5 } ],
      "path": [ "widget1" ]
    }
  ],
  "data": {
    "widget1": null,
    "widget2": {
      "key": "value"
    }
  }
}

GraphQL in Ruby world

The Ruby GraphQL gem, used by GitHub and Shopify, is quite powerful and battle-tested.

Netto Farah showed a great solution for battling N+1 queries in Ruby projects - GraphQL::QueryResolver. In the same talk, he also gave great tips about better monitoring of GraphQL. If you are using GraphQL with Ruby, definitely check out his slides.

The talk Launching GitHub's Public GraphQL API gave great tips about:

  • handling authorization
  • using GraphQL for backfilling legacy REST endpoints
  • schema design

Conclusion

Overall, GraphQL-Europe was great. Kudos to Honeypot and Graphcool for organizing such a great event.

p.s. I almost forgot! I finally found out what OData is :P.

Introducing SearchObject GraphQL Plugin

When I started using GraphQL, I immediately saw that SearchObject would be a perfect fit for search resolvers.

A GraphQL query fetching the 10 most recent published news posts would look something like this:

query {  
  posts(first: 10, categoryName: "News", order: "RECENT", published: true) {    
    id  
    title  
    body  
    author {      
       id
       name
    }
  }
}

And it would have a corresponding SearchObject:

class Resolvers::PostSearch  
  include SearchObject.module  

  scope { Post.all }    

  option :categoryName, with: :apply_category_name_filter
  option :published, with: :apply_published_filter
  option :order, enum: %i(RECENT VIEWS LIKES)  

  # ... definitions of the option methods
end  

So clean. ☀️

But then PostSearch has to be connected with the GraphQL Ruby gem:

PostOrderEnum = GraphQL::EnumType.define do  
  name 'PostOrder'

  value 'RECENT'
  value 'VIEWS'
  value 'LIKES'
end  

Types::QueryType = GraphQL::ObjectType.define do  
  name 'Query'

  field :posts, types[Types::PostType] do
    argument :categoryName, types.String  
    argument :published, types.Boolean  
    argument :order, PostOrderEnum
    resolve ->(_obj, args, _ctx) { Resolvers::PostSearch.results(filters: args.to_h) } 
  end
end  

That isn't so bad. 🤔

But then, thinking about how this code can change in the future:

  • adding/removing options would involve going to both files
  • adding a new order option would mean finding PostOrderEnum and manually syncing it with the PostSearch enum
  • reusing PostSearch in other types, for queries like: query { user(id: 1) { posts(published: true) } }
    • requires copy and paste argument/type definitions
    • which makes updating the resolver even harder

Yikes! 😤 😷

This is where SearchObject::Plugin::GraphQL comes in. It puts the type definitions and the resolver itself in one place:

class Resolvers::PostSearch  
  # include the plugin
  include SearchObject.module(:graphql)

  # the type and documentation for this resolver
  # can be provided in the resolver itself
  type types[Types::PostType]
  description 'Lists posts'

  # enums or other types can also be nested
  OrderEnum = GraphQL::EnumType.define do
    name 'PostOrder'

    value 'RECENT'
    value 'VIEWS'
    value 'LIKES'
  end

  scope { Post.all }    

  # options just need to have their type specified
  option :categoryName, type: types.String, with: :apply_category_name_filter
  option :published, type: types.Boolean, with: :apply_published_filter  
  # enums are automatically handled
  option :order, type: OrderEnum

  # ... definitions of the option methods
end  

Then PostSearch can be used just like a GraphQL::Function:

Types::QueryType = GraphQL::ObjectType.define do  
  name 'Query'

  field :posts, function: Resolvers::PostSearch
end  

Now, changing filter options requires changing only a single file, and PostSearch can be reused in other types by just adding function: Resolvers::PostSearch.

For more information, check the SearchObject::Plugin::GraphQL example.

Introducing KittyEvents

During the Christmas break, Mike and I were discussing a new feature at Product Hunt. The feature required scheduling an ActiveJob when a user signs up, votes, or submits a comment.

There is a SignUp object which handles user registration, so scheduling a new background job there is quite simple:

module SignUp  
  # ... handle user sign up

  def after_sign_up(user)
    WelcomeEmailWorker.perform_later(user)
    WelcomeTweetWorker.perform_later(user)
    SyncProfileImageWorker.perform_later(user)
    NewFancyFeatureWorker.perform_later(user) # <- new worker
  end
end  

Unfortunately, the after_sign_up method was becoming quite large ☹️
Now imagine having to add NewFancyFeatureWorker to 10 other places 😫

Those issues pushed us to create a simple wrapper around ActiveJob, which we call KittyEvents.

Now in SignUp there is just one trigger for an "event":

module SignUp  
  # ... handle user sign up

  def after_sign_up(user)
    ApplicationEvents.trigger(:user_signup, user)
  end
end  

And there is a central place where events are mapped to ActiveJob workers:

# config/initializers/application_events.rb
module ApplicationEvents  
  extend KittyEvents

  event :user_signup, [
    WelcomeEmailWorker,
    WelcomeTweetWorker,
    SyncProfileImageWorker,
    NewFancyFeatureWorker, # <- new worker
  ]

  # ... other events
end  

When an event is triggered, all ActiveJob workers for that event are scheduled and executed.
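The event-to-workers fan-out above can be sketched in a few lines. This is a hypothetical mini version for illustration, not the gem's actual internals (the real KittyEvents dispatches through ActiveJob):

```ruby
# Minimal sketch of an events module: `event` registers workers,
# `trigger` schedules each of them.
module MiniEvents
  def self.extended(base)
    base.instance_variable_set(:@handlers, {})
  end

  # map an event name to the workers interested in it
  def event(name, workers)
    @handlers[name] = workers
  end

  # schedule every registered worker for the given event
  def trigger(name, *args)
    workers = @handlers.fetch(name) { raise ArgumentError, "unknown event: #{name}" }
    workers.each { |worker| worker.perform_later(*args) }
  end
end
```

The calling code never knows which workers subscribe to an event, which is exactly what keeps after_sign_up down to a single line.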

Another bonus: when using KittyEvents, triggering an event costs only a single Redis call, no matter how many workers it fans out to. This shaves off precious milliseconds when triggering events during a request.

Feature flags in React

A month ago, I gave a talk at the js.talks() conference about React at Product Hunt.

One of the sections that didn't make it into the talk was how feature flags are handled at Product Hunt.

Almost every feature at Product Hunt starts with a feature flag in Flipper. The feature is only available to a selected group of users - initially only the developers working on it. This allows for splitting a big feature into smaller deployable chunks, which eliminates a large class of problems and allows for very early feedback on features.

The usual feature timeline looks something like this: [feature timeline diagram]

After a feature is completed and has run without issues for some time, the feature flag is removed.

Working this way also helps with code structure, since all features are isolated.

Usage

In the backend, there is a facade for Flipper:

Features.enabled?('unicorns', current_user)  
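An in-memory sketch of what such a facade might look like (hypothetical; the real facade delegates to Flipper, e.g. Flipper.enabled?(name, actor), rather than keeping its own store):

```ruby
# Toy Features facade with an in-memory store, standing in for Flipper.
module Features
  @enabled = Hash.new { |hash, key| hash[key] = [] }

  class << self
    # grant an actor (e.g. a user) access to a feature
    def enable(name, actor)
      @enabled[name] << actor
    end

    # check whether a feature is on for the given actor
    def enabled?(name, actor)
      @enabled[name].include?(actor)
    end
  end
end
```

The point of the facade is that call sites only ever see Features.enabled?, so the underlying flag system can be swapped without touching them.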

In the frontend, feature flags are stored in a Redux reducer and exposed via the following utilities:

// `LinkToUnicorns` would be shown only when the user has access to the unicorns feature
<EnabledFeature name="unicorns">  
   <LinkToUnicorns />
</EnabledFeature>  
// depending on user permissions, UnicornsPage or PageNotFound would be rendered
const UnicornsBranch = createFeatureFlaggedContainer({
  featureName: 'unicorns',
  enabledComponent: UnicornsPage,
  disabledComponent: PageNotFound,
});

// ...
 <Route path="/unicorns" component={UnicornsBranch} />
// ...

Sample implementation

Here is a sample Redux implementation of those components:

// This is quite simple reducer, containing only an array of features.
// You can attach this data to a `currentUser` or similar reducer.

// `BOOTSTRAP` is a global action which contains the initial data for a page.
// Feature access usually doesn't change while the user is on a page.
const BOOTSTRAP = 'features/receive';

export default function featuresReducer(state, { type, payload }) {
  if (type === BOOTSTRAP) {
    return payload.features || [];
  }

  return state || [];
}

export function isFeatureEnabled(features, featureName) {  
  return features.indexOf(featureName) !== -1;
}
// This is your main reducer.js file
import { combineReducers } from 'redux';

import features, { isFeatureEnabled as isFeatureEnabledSelector } from './features';
// ...other reducers

export default combineReducers({  
  features,
  // ...other reducers
});

// This is the important part: access to the `features` reducer should only happen via this selector.
// Then you can always change where/how the features are stored.
export function isFeatureEnabled({ features }, featureName) {
  return isFeatureEnabledSelector(features, featureName);
}

Here are the components implementations:

import { connect } from 'react-redux';  
import { isFeatureEnabled } from './reducers'

function EnabledFeature({ isEnabled, children }) {  
  if (isEnabled) {
    return children;
  }

  return null;
}

export default connect((store, { name }) => ({ isEnabled: isFeatureEnabled(store, name) }))(EnabledFeature);
import { isFeatureEnabled } from './reducers'

export default function createFeatureFlaggedContainer({ featureName, enabledComponent, disabledComponent }) {  
  function FeatureFlaggedContainer({ isEnabled, ...props }) {
    const Component = isEnabled ? enabledComponent : disabledComponent;

    if (Component) {
      return <Component {...props} />;
    }

    // `disabledComponent` is an optional property
    return null;
  }

  // Having `displayName` is very useful for debugging.
  FeatureFlaggedContainer.displayName = `FeatureFlaggedContainer(${featureName})`;

  return connect((store) => ({ isEnabled: isFeatureEnabled(store, featureName) }))(FeatureFlaggedContainer);
}

(code in gist)

Handling paths in React application

When using React Router, `a` elements should be replaced by Link:

<Link to="/about">About</Link>  
<Link to={`/${post.categorySlug}/${post.slug}`}>{post.name}</Link>  

Passing a route to Link as a string works, but doesn't protect us from typos. Changing routes is also not very easy: for example, if we decide that /${post.categorySlug}/${post.slug} should become /posts/${post.slug}, there would be a lot of grepping.

Ruby on Rails solves those problems by generating a helper method for every route in your application.
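A toy version of what Rails does (names hypothetical, for illustration only): given a route table, define a *_path helper for each named route.

```ruby
# Generate a `<name>_path` helper method for every named route.
module Paths
  ROUTES = { about: '/about', contacts: '/contacts' }.freeze

  ROUTES.each do |name, path|
    define_singleton_method("#{name}_path") { path }
  end
end
```

Call sites use Paths.about_path instead of the string '/about', so a route can change in one place.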

This concept works great with React Router:

<Link to={paths.about()}>About</Link>  
<Link to={paths.post(post)}>{post.name}</Link>  

All you have to do is define all your routes in a file:

// routes.js
export default {  
   post(post) {
     return `/${post.categorySlug}/${post.slug}`;
   },

   about() {
    return '/about';
   },

   // not all routes are strings
   contacts() {
     return { pathname: 'contacts', state: { modal: true } }; 
   },

   // helper for image sources
   image(path) {
     return `https://product-hunt-cdn.com/images/${path}`;
   },

   // ....
};

This technique works great with Flow, giving you type safety in your links.

I have thought several times about generating this file from the router component, but I didn't have a chance to do so, and with React Router v4 this won't be very easy.