Who We Are

We are Optimum BH — a cutting-edge software development agency specializing in full-stack development, with a focus on web and mobile applications built on the PETAL stack.

What We Do

At Optimum BH, we are dedicated to pushing the boundaries of software development, delivering solutions that empower businesses to thrive in the digital landscape.

Web app development

We create dynamic and user-friendly web applications tailored to meet your specific needs and objectives.

Mobile app development

We design and develop mobile applications that captivate users, delivering an unparalleled experience across iOS and Android platforms.

Maintenance and support

Our commitment doesn't end with deployment. We provide ongoing maintenance and support to ensure your applications remain up-to-date, secure, and optimized for peak performance.

Blog Articles

Leveraging FLAME for Efficient Screenshot Generation

A successful knowledge-sharing platform, like Elixir Drops, depends on making its content both discoverable and easily shareable. Metatags are essential for achieving this by providing search engines, social media platforms, and browsers with critical metadata to understand, display, and rank content appropriately. Among these, metatag images—visuals that appear when a link is shared on platforms like Facebook, Twitter, or Slack—play a key role in engaging users. These images, specified via Open Graph (og:image) and Twitter Card (twitter:image) tags, significantly enhance the appearance and click-through potential of shared links.

For platforms centered on code sharing, developers typically use tools to style and format code snippets into visually appealing screenshots for social media. However, manually creating such images for a growing platform like Elixir Drops is not scalable. To address this, we automated the creation of custom metatag images using Elixir tools and the FLAME library for dynamic scaling.

Automating Metatag Image Creation

1. Triggering a Background Job

Every time a post is created or updated, a background job is queued to generate a screenshot of the post content. This image is uploaded to an S3 bucket and referenced in the post's metatags.

```elixir
def handle_event("save", %{"drop" => drop_params}, socket) do
  case create_or_update_drop(socket, socket.assigns.live_action, drop_params) do
    {:ok, drop} ->
      enqueue_seo_screenshot_creation(drop.id)

      {:noreply, push_navigate(socket, to: ~p"/profile")}

    {:error, changeset} ->
      {:noreply, assign_form(socket, changeset)}
  end
end

defp enqueue_seo_screenshot_creation(drop_id) do
  %{"drop_id" => drop_id}
  |> ScreenshotGeneratorWorker.new()
  |> Oban.insert()
end
```

2. Background Job for Screenshot Generation

The background job ensures that only posts with code blocks trigger the screenshot generation process. The job scans the Markdown content of the post using a regex.
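The post doesn't show `@markdown_regex` itself. As a hedged sketch, a pattern that detects fenced Markdown code blocks (three backticks, optional language tag, closing three backticks) might look like this; the real regex used by Elixir Drops may well be stricter:

```elixir
# Hypothetical sketch of @markdown_regex: matches a fenced code block
# (three backticks, optional language tag, then a closing fence).
fence = String.duplicate("`", 3)
markdown_regex = ~r/`{3}\w*\n[\s\S]*?`{3}/

check_for_code_block = fn body ->
  case Regex.run(markdown_regex, body, capture: :first) do
    nil -> {:error, "No code block found"}
    _code_block -> :ok
  end
end

with_code = "Some prose\n#{fence}elixir\n1 + 1\n#{fence}"

check_for_code_block.(with_code)       # => :ok
check_for_code_block.("no code here")  # => {:error, "No code block found"}
```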
If no code block is found, the job is canceled to avoid unnecessary retries.

```elixir
def perform(%Oban.Job{args: args}) do
  args
  |> maybe_create_screenshot()
  |> maybe_retry_job()
end

defp maybe_retry_job({:ok, _image_url}), do: :ok
defp maybe_retry_job({:cancel, reason}), do: {:cancel, reason}
defp maybe_retry_job(error), do: error

defp maybe_create_screenshot(args) do
  with {:ok, drop} <- get_drop(args["drop_id"]),
       :ok <- check_for_code_block(drop.body) do
    drop_screenshot(drop)
  else
    _error -> {:cancel, "No code block found"}
  end
end

defp check_for_code_block(body) do
  case Regex.run(@markdown_regex, body, capture: :first) do
    nil -> {:error, "No code block found"}
    _code_block -> :ok
  end
end
```

3. Screenshot Generation

Screenshots are created using Wallaby, which relies on Chromedriver. The setup dynamically generates browser-based screenshots of the code blocks.

```elixir
defp generate_screenshot(drop) do
  {:ok, session} =
    Wallaby.start_session(
      capabilities: %{
        chromeOptions: %{
          args: [
            "--headless",
            "--no-sandbox",
            "window-size=1280,800",
            "--fullscreen",
            "--disable-gpu",
            "--disable-dev-shm-usage"
          ]
        }
      }
    )

  url = build_url_with_auth(drop)

  %Wallaby.Session{screenshots: [screenshot]} =
    session
    |> Browser.visit(url)
    |> Browser.take_screenshot()

  Wallaby.end_session(session)

  {:ok, screenshot}
end
```

4. Uploading to S3

The generated screenshot is uploaded to S3 for persistent storage and later reference in the post's metatags.

```elixir
defp upload_screenshot(screenshot, drop) do
  timestamp = Timex.to_unix(drop.updated_at)
  image_name = "drop-meta-image-#{timestamp}-#{drop.id}.png"

  case Client.upload_image(screenshot, image_name, "image/png") do
    {:ok, image_url} -> {:ok, image_url}
    error -> error
  end
end
```

Handling tasks like screenshot generation and S3 uploads can lead to spikes in resource usage, especially with increasing demand. To manage this efficiently, we leveraged FLAME, a library designed for elastic workloads.
FLAME allows resource-intensive operations to run on short-lived infrastructure, scaling dynamically to meet demand and scaling down during idle times.

"FLAME is a distributed, serverless-inspired library and paradigm in Elixir, designed to efficiently manage elastic workloads—tasks with highly variable resource demands. It enables developers to treat their entire application as a lambda, allowing modular components to execute on short-lived infrastructure without requiring rewrites or complex orchestration." (Docs)

Using FLAME to Handle Screenshot Generation

FLAME provides a powerful way to manage elastic workloads. Here are the steps to integrate FLAME for dynamically generating screenshots for your posts.

1. Add the FLAME dependency

```elixir
# mix.exs
{:flame, "~> 0.5.1"},
```

2. Setting up FLAME

Inspired by a great example, we configure FLAME to enable or disable services depending on whether a node is running as a FLAME child. You can learn more about this setup from the Deployment Considerations and Pools documentation.
Here's how we configured application.ex:

```elixir
# application.ex
@impl Application
def start(_type, _args) do
  children =
    children(
      always: ElixirDropsWeb.Telemetry,
      always: ElixirDropsWeb.Endpoint,
      parent: ElixirDrops.Repo,
      parent: {DNSCluster, query: Application.get_env(:elixir_drops, :dns_cluster_query) || :ignore},
      parent: {Phoenix.PubSub, name: ElixirDrops.PubSub},
      # Start the Finch HTTP client for sending emails
      parent: {Finch, name: ElixirDrops.Finch},
      parent:
        {FLAME.Pool,
         name: ElixirDrops.ScreenshotGenerator,
         idle_shutdown_after: 30_000,
         log: :info,
         max_concurrency: 2,
         max: 4,
         min: 0},
      parent: {Oban, Application.get_env(:elixir_drops, Oban)}
    )

  # See https://hexdocs.pm/elixir/Supervisor.html
  # for other strategies and supported options
  opts = [strategy: :one_for_one, name: ElixirDrops.Supervisor]
  Supervisor.start_link(children, opts)
end

# Tell Phoenix to update the endpoint configuration
# whenever the application is updated.
@impl Application
def config_change(changed, _new, removed) do
  ElixirDropsWeb.Endpoint.config_change(changed, removed)
  :ok
end

# Exclude children marked with `parent` in the FLAME environment
defp children(child_specs) do
  is_parent? = is_nil(FLAME.Parent.get())
  is_flame? = !is_parent? || FLAME.Backend.impl() == FLAME.LocalBackend

  Enum.flat_map(child_specs, fn
    {:always, spec} -> [spec]
    {:parent, spec} when is_parent? == true -> [spec]
    {:parent, _spec} when is_parent? == false -> []
    {:flame, spec} when is_flame? == true -> [spec]
    {:flame, _spec} when is_flame? == false -> []
  end)
end
```

3. Configuring the Fly Backend

Since we're using Fly.io machines, we need to configure the FlyBackend and set the environment variables for FLAME. Note that Fly.io machines running FLAME tasks do not inherit the parent's environment variables.
Add the following configuration in config/runtime.exs:

```elixir
# config/runtime.exs
config :flame,
  backend: FLAME.FlyBackend,
  env: %{
    "AWS_ACCESS_KEY_ID" => aws_access_key_id,
    "AWS_ENDPOINT_URL_S3" => aws_endpoint_url,
    "AWS_REGION" => aws_region,
    "AWS_SECRET_ACCESS_KEY" => aws_secret_access_key,
    "BUCKET_NAME" => aws_bucket
  },
  token: fly_api_token
```

4. Wrapping Screenshot Generation in FLAME

Once FLAME is set up, we can wrap the screenshot generation logic in a FLAME call to execute the task on short-lived infrastructure. Here's how you can wrap the screenshot generation and S3 upload logic:

```elixir
defp drop_screenshot(drop) do
  FLAME.call(ScreenshotGenerator, fn ->
    with {:ok, screenshot} <- generate_screenshot(drop),
         {:ok, image} <- File.read(screenshot) do
      upload_screenshot(image, drop)
    end
  end)
end
```

By integrating FLAME into our workflow, we have streamlined the process of generating and uploading screenshots for our posts, allowing us to handle elastic workloads efficiently without introducing unnecessary complexity. FLAME's ability to dynamically scale resources on demand ensures that we can handle spikes in demand—such as when processing multiple screenshot requests—while maintaining cost-effectiveness and simplicity. This approach not only simplifies our infrastructure but also enables us to focus on providing a better experience for users without worrying about the complexities of serverless architectures. Whether you're handling resource-intensive tasks like screenshot generation or other elastic workloads, FLAME offers a robust and scalable solution for modern web applications.

You can learn more about FLAME:

- Official Docs
- Rethinking Serverless with FLAME
- Scaling Your Phoenix App in Elixir with FLAME
- Serverless With Servers? FLAME is...weird
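To close the loop on the motivation at the top of the post: the uploaded screenshot URL ultimately lands in the page's metatags. A small sketch of the resulting markup, built as a plain string — the bucket host and exact tag set here are illustrative, not Elixir Drops' actual template:

```elixir
# Illustrative only: how the uploaded screenshot URL might surface in the
# page head. The bucket host and exact tag set are hypothetical.
image_url = "https://bucket.example.com/drop-meta-image-1700000000-42.png"

meta_tags = """
<meta property="og:image" content="#{image_url}" />
<meta name="twitter:image" content="#{image_url}" />
<meta name="twitter:card" content="summary_large_image" />
"""
```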
Nyakio Muriuki

Getting Started with Ash Framework in Elixir

Are you looking for a powerful and flexible way to build Elixir applications? Look no further than the Ash framework! In this blog post, we'll introduce you to Ash, explain why it's great for building applications, and show you how to get started.

What is Ash Framework?

Ash is a declarative, resource-based framework for building Elixir applications. It provides a set of powerful tools and abstractions that make it easier to build complex, data-driven applications while maintaining clean and maintainable code.

Why Use Ash Framework?

- Declarative Design: Ash allows you to define your application's structure and behavior declaratively, making your code more readable and maintainable.
- Resource-Based Architecture: With Ash, you model your application around resources, which encapsulate data and behavior in a cohesive way.
- Built-in Features: Ash provides many built-in features like pagination, filtering, and authorization, reducing the amount of boilerplate code you need to write.
- Extensibility: The framework is highly extensible, allowing you to customize and extend its functionality to fit your specific needs.
- Integration with Phoenix: Ash integrates seamlessly with Phoenix, making it easy to build web applications with a powerful backend.

Installing Ash Framework

To get started with Ash in a new or existing Elixir project, you'll need to add the necessary dependencies to your mix.exs file:

```elixir
defp deps do
  [
    {:ash_phoenix, "~> 2.0"},
    {:ash, "~> 3.0"},
    # ... other dependencies
  ]
end
```

Then, run mix deps.get to install the dependencies. For more detailed installation instructions and configuration options, check out the Ash Installation Guide.

Key Features of Ash Framework

Let's explore a few key features of Ash that make it powerful for building applications:

1. Ash Resources

In Ash, you define your application's data model using resources.
Here's an example of a simple Post resource:

```elixir
defmodule AshBlog.Posts.Post do
  use Ash.Resource,
    data_layer: AshPostgres.DataLayer,
    domain: AshBlog.Posts

  postgres do
    table "posts"
    repo AshBlog.Repo
  end

  actions do
    defaults [:read, :destroy, create: :*, update: :*]
  end

  attributes do
    uuid_primary_key :id

    attribute :title, :string, allow_nil?: false, public?: true
    attribute :body, :string, allow_nil?: false, public?: true

    create_timestamp :inserted_at
    update_timestamp :updated_at
  end
end
```

This resource definition includes attributes, actions, and database configuration. Ash takes care of generating the necessary database schema and provides a high-level API for interacting with your data.

2. Built-in Pagination

One of the powerful features of Ash is its built-in support for pagination. Ash comes with both offset and keyset pagination out of the box. With just a few lines of code, you can implement pagination in your application. To set up keyset pagination in your resource, add this under actions:

```elixir
# A `:read` action that returns a paginated list of posts,
# with a default of 10 posts per page
read :list do
  pagination keyset?: true, default_limit: 10
  prepare build(sort: :inserted_at)
end
```

And here's how you can use it in your LiveView:

```elixir
defp list_posts(%{assigns: %{load_more_token: nil}} = socket) do
  case Posts.read(Post, action: :list, page: [limit: 10]) do
    {:ok, %{results: posts}} ->
      load_more_token = List.last(posts) && List.last(posts).__metadata__.keyset

      socket
      |> assign(:load_more_token, load_more_token)
      |> stream(:posts, posts, reset: socket.assigns.load_more_token == nil)

    {:error, error} ->
      put_flash(socket, :error, "Error loading posts: #{inspect(error)}")
  end
end

defp list_posts(%{assigns: %{load_more_token: load_more_token}} = socket) do
  case Posts.read(Post, action: :list, page: [after: load_more_token, limit: 10]) do
    {:ok, %{results: posts}} ->
      load_more_token = List.last(posts) && List.last(posts).__metadata__.keyset

      socket
      |> assign(:load_more_token, load_more_token)
      |> stream(:posts, posts, at: -1, reset: socket.assigns.load_more_token == nil)

    {:error, error} ->
      put_flash(socket, :error, "Error loading posts: #{inspect(error)}")
  end
end
```

This implementation allows for efficient loading of posts as the user scrolls, creating an infinite scrolling behaviour.

3. Ash.Notifier

Another powerful feature of Ash is the ability to broadcast changes in resources using Ash.Notifier. This is particularly useful when you want to update the UI in real-time when data changes. Here's an example of how to set up a notifier:

```elixir
defmodule AshBlog.Notifiers do
  use Ash.Notifier

  def notify(%{action: %{type: :create}, data: post}) do
    Phoenix.PubSub.broadcast(AshBlog.PubSub, "post_creation", {:post_created, post})
  end
end
```

This notifier broadcasts a message whenever a new post is created. Then, add the notifier to your resource:

```elixir
defmodule AshBlog.Posts.Post do
  use Ash.Resource,
    data_layer: AshPostgres.DataLayer,
    domain: AshBlog.Posts,
    notifiers: [AshBlog.Notifiers] # <-- add this

  # ...rest of your code
end
```

You can then subscribe to these notifications in your Phoenix LiveView to update the UI in real-time.

4. Integration with Phoenix LiveView

Ash integrates seamlessly with Phoenix LiveView, allowing you to build reactive user interfaces. Here's an example of how to use Ash with LiveView:

```elixir
defmodule AshBlogWeb.PostLive.Index do
  use AshBlogWeb, :live_view

  alias AshBlog.Posts
  alias AshBlog.Posts.Post

  @impl Phoenix.LiveView
  def mount(_params, _session, socket) do
    if connected?(socket), do: Phoenix.PubSub.subscribe(AshBlog.PubSub, "post_creation")

    form =
      Post
      |> AshPhoenix.Form.for_create(:create)
      |> to_form()

    {:ok,
     socket
     |> assign(:page_title, "AshBlog Posts")
     |> assign(:load_more_token, nil)
     |> assign(:form, form)
     |> stream(:posts, [])}
  end

  # ... (other LiveView callbacks and event handlers)
end
```

This LiveView lists posts, handles pagination, and updates in real-time when new posts are created.
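Stepping back to the pagination section: the keyset mechanics can be sketched without Ash at all — keep the last item's sort key as the token and fetch strictly after it. Here's a dependency-free, in-memory stand-in (the data and field names are illustrative, not Ash's API):

```elixir
# In-memory stand-in for keyset pagination: `token` is the sort key of the
# last item on the previous page (nil for the first page).
posts = Enum.map(1..25, &%{id: &1, inserted_at: &1})

list_page = fn
  nil, limit ->
    Enum.take(posts, limit)

  token, limit ->
    posts
    |> Enum.drop_while(&(&1.inserted_at <= token))
    |> Enum.take(limit)
end

first = list_page.(nil, 10)                 # posts 1..10
token = List.last(first).inserted_at
second = list_page.(token, 10)              # posts 11..20
```

Ash's keyset?: true does the equivalent with an opaque cursor (the __metadata__.keyset seen above) and a real database query, which is why keyset pages stay stable even as new rows are inserted ahead of them.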
To see a complete example of an Ash-powered blog application, you can check out this sample AshBlog project on GitHub.

Conclusion

Ash framework provides a powerful and flexible way to build Elixir applications. Its declarative approach, resource-based architecture, built-in features like pagination, and integration with Phoenix make it an excellent choice for building complex, data-driven applications. To learn more about Ash and dive deeper into its features, check out the following resources:

- Ash Documentation
- AshPhoenix Documentation
- Phoenix LiveView Documentation
- Phoenix Framework Guides

Happy coding with Ash framework!
Amos Kibet

Optimum infrastructure generator

In the Elixir DevOps blog post series, we wrote about our development workflows and the infrastructure facilitating them. Those are the tools we reach for on most of our projects. Fly.io is our platform of choice, but even when we're not the ones making that decision, we at least set up continuous integration the way we described in Optimum Elixir CI with GitHub Actions.

There are many moving pieces involved in the infrastructure setup, which can cost a lot of developer hours, even if you follow along with our blog post series. As a small business owner, development team lead, or anyone involved in decision-making, you'll have a tough time justifying money spent on developers reinventing the wheel that is a CI/CD pipeline and other aspects of infrastructure setup, versus taking an off-the-shelf solution.

We didn't want to do this manually on all our projects, so we created a generator that simplifies the process greatly. And now we offer it to everyone else, too. We target startups and small businesses that don't yet require a huge infrastructure (AWS, Google Cloud, Terraform, Kubernetes, a custom setup on bare metal, you name it). Even if you're already on Kubernetes, maybe you should reconsider whether it's appropriate for your needs and your scale.

The generator serves as the glue between the different tools we covered in the blog post series:

- Optimum CI with a revolutionary yet simple caching strategy
- automatic deployment to the staging server on Fly.io on merge
- automatic creation of preview apps on Fly.io on PR creation and updates
- config for the production server on Fly.io

Plus:

- AppSignal configuration
- health check
- mise setup

Whether you're working on an existing app or a completely new one, we've got you covered. Both plain Elixir and Phoenix apps are supported, with different feature sets.
If you're working on an Elixir app without Phoenix, you get:

- mise config (.tool-versions file, reading env variables from the .env file)
- local code checks (compilation warnings, Credo, Dialyzer, dependencies audit, Sobelow, Prettier formatting, Elixir formatting, tests, and coverage)
- CI on GitHub Actions
- docs release setup

If you're working on a Phoenix app, on top of that, you get:

- health checks
- AppSignal configuration
- Dockerfile for environments on Fly.io
- preview, staging, and production environments config for Fly.io
- CI and CD on GitHub Actions
- setup for preview apps and staging deployment

We offer all of this at predictable, streamlined pricing. The regular price is $499, but for a limited time only, you can get it for $299. Buy it once and run it as many times as you want, on any project you want. We support both plain Elixir and Phoenix apps. Running the generator in Elixir apps sets up mise and CI (locally and on GitHub), while for Phoenix apps it additionally sets up CD.

Visit hex.codecodeship.com/package/optimum_gen_infra to get started.
Almir Sarajčić

Exciting Updates to Phx.tools

In the ever-evolving landscape of software development, it's essential to keep our tools lean, efficient, and up-to-date. We're excited to share the latest updates to Phx.tools, the complete development environment for Elixir and Phoenix. If you've been following our journey since the initial release (as documented in our previous blog post), you'll appreciate the enhancements we've made to streamline and modernize the toolset.

Removal of Unnecessary Software

One of the primary goals of this update was to eliminate any bloatware that didn't contribute directly to the development workflow. We took a closer look at the included software packages, and among the removed packages are:

Chrome and Chromedriver: While these tools are useful in some contexts, they aren't always needed for Elixir and Phoenix development tasks. By removing them, we've reduced the overall footprint of Phx.tools, making it more efficient and less resource-intensive.

Docker: Docker is a powerful tool, but it's not a necessity for all developers. Recognizing that not every project requires containerization, we've removed Docker to simplify the environment. Developers who need it can still easily install it separately.

These removals not only slim down the installation but also reduce potential security vulnerabilities and maintenance overhead.

Updated Software Versions

In addition, all the remaining software has been updated to the latest versions. This ensures you have access to the most recent features and improvements, providing a more robust and up-to-date development setup.

mise: A Superior Replacement for asdf

In this update, we've also replaced asdf with mise as our tool for managing language and package versions. mise, available at mise.jdx.dev, offers a more streamlined and efficient experience compared to asdf.

Performance: mise is optimized for speed, significantly reducing the time it takes to switch between versions of Elixir, Erlang, or other tools.
It also installs multiple tools in parallel. This performance boost helps you maintain your flow without the delays often encountered with asdf.

Simplicity: mise has a more intuitive setup and fewer dependencies, making it easier to configure and use. Unlike asdf, which often requires additional tools like direnv to manage environment variables, mise natively reads your .env file, eliminating the need for external software and simplifying your workflow.

Erlang Build Support: When building Erlang, mise automatically takes your ~/.kerlrc configuration file into account, ensuring that your custom settings are applied seamlessly.

By adopting mise, we've made Phx.tools faster and more user-friendly, ensuring that you have the best possible tools at your disposal.

Shell Flexibility

Perhaps the most user-friendly update is the change in how we handle shell environments. Previously, we "forced" users to adopt the Zsh shell. While Zsh offers many powerful features, we recognized that forcing a specific shell setup could disrupt developers accustomed to their existing environments. Phx.tools now automatically detects your current shell configuration and uses it, whether you're working with Bash, Zsh, or another shell. This change ensures a smoother, more personalized experience, allowing you to work in the environment you're most comfortable with.

Conclusion

The latest update to Phx.tools represents our commitment to creating a streamlined, up-to-date, and user-friendly development environment. By removing unnecessary software, updating the remaining tools, and introducing shell flexibility, we've made Phx.tools more efficient and adaptable to your needs. We're excited to see how these changes enhance your development experience. As always, we welcome your feedback and look forward to continuing to evolve Phx.tools to meet the needs of the Elixir community. Stay tuned for more updates, and happy coding!
Amos Kibet

Zero downtime deployments with Fly.io

If you were wondering why you saw the topbar loading for ~5 seconds every time you deployed to Fly.io, you're in the right place. We need to talk about deployment strategies. Typically, there are several, and Fly.io supports these:

- immediate
- rolling
- bluegreen
- canary

The complexity and price go from low to high as we go down the list. The default option is rolling: your machines are replaced by new ones, one by one. If you only have one machine, it will be destroyed before there's a new one that can handle requests. That's why you're waiting to be reconnected whenever you deploy. You can read more about these deployment strategies at https://fly.io/docs/apps/deploy/#deployment-strategy.

We're using the blue-green deployment strategy, as it strikes a balance between benefits, cost, and ease of setup.

If you're using volumes, I have to disappoint you: the blue-green strategy doesn't work with them yet, but Fly.io plans to support that in the future.

Setup

You need to configure at least one health check to use the bluegreen strategy. I won't go into details; you can find more at https://fly.io/docs/reference/configuration/#http_service-checks.

Here's a configuration we use:

```toml
[[http_service.checks]]
grace_period = "10s"
interval = "30s"
method = "GET"
path = "/health"
timeout = "5s"
```

Then, add strategy = "bluegreen" under [deploy] in your fly.toml file:

```toml
[deploy]
strategy = "bluegreen"
```

and run fly deploy.

That's it! You probably expected the setup to be more complex than this. So did I!

Conclusion

While Fly.io is moving you from a blue to a green machine, your websocket connection will be dropped, but it will quickly reestablish. You shouldn't even notice it unless you have your browser console open or you're navigating through pages during the deployment.
One thing you should keep in mind, though, is that your client-side state (e.g. form data) might be lost if you don't address that explicitly.

Another thing to think about is the way you run Ecto migrations. If you're dropping tables or columns, you might want to do that in multiple stages. For example, you might first change the code so it stops depending on specific columns or tables and deploy that change. After that, a subsequent deployment can ship the structural changes to the database. That way, both blue and green machines will have the same expectations regarding the database structure.

The future will bring us more options for deployment. Recently, Chris McCord teased us with hot deploys: https://x.com/chris_mccord/status/1785678249424461897

Can't wait for this!

This was a post from our Elixir DevOps series.
Almir Sarajčić

Feature preview (PR review) apps on Fly.io

In this blog post, I explain how we approach manual testing of new features at Optimum.

Collaborating on new features with non-developers often requires sharing our progress with them. We can do quick demos in our dev environment, but if we want to let them play around on their own, we need to provide them with an environment facilitating that. Setting up a dev machine is easy thanks to phx.tools: Complete Development Environment for Elixir and Phoenix, but pulling updates in our projects still requires basic git knowledge.

We could solve this by deploying in-progress work to the staging server, but that becomes messy in larger teams, so we stay away from it. Instead, we replicate the production environment for each feature we are working on, and we only deploy the main branch with finished features to staging. With an environment created specifically for the feature we are working on, we can be sure nothing will surprise us after shipping it. Automated tests help with that too, but we still like doing manual checks just before deploying to production.

Heroku review apps

Back when I was working on Ruby on Rails apps and websites, like many, I chose Heroku as my PaaS (platform-as-a-service). It had (and still has) a great feature called Review apps.

App deployment pipeline on Heroku

It enables you to create new Heroku environments in your pipeline for the PRs in your GitHub repo, either manually through their UI or automatically using a configuration file. You can configure the dynos, environment variables, addons… any prerequisite for running your application. This was a great experience when I worked with Ruby, but when I moved to Elixir, Heroku didn't fit me anymore, so I moved to Fly.io.

Fly.io PR review apps

Fly.io introduced something similar using GitHub Actions: https://github.com/superfly/fly-pr-review-apps. It's not as powerful or as user-friendly, but it's a good starting point for building your workflows.
Here's the official guide: https://fly.io/docs/blueprints/review-apps-guide/.

The current version forces you to share databases and volumes between different PR review apps. We didn't want that, so last year my colleague Amos introduced a fork that solves this, accompanied by the blog post How to Automate Creating and Destroying Pull Request Review Phoenix Applications on Fly.io. We've also added some minor changes there. Some of them have since been implemented upstream, yet the setup for the database and volume is still missing. Here's the diff: https://github.com/superfly/fly-pr-review-apps/compare/6f79ec3a7d017082ed11e7c464dae298ca75b21b...optimumBA:fly-preview-apps:b03f97a38e6a6189d683fad73b0249c321f3ef4a.

Examples

We use preview apps for our phx.tools website. Although it doesn't use a DB or volumes, it's still a good example of setting up preview apps on Fly.io: https://github.com/optimumBA/phx.tools/blob/main/.github/github_workflows.ex.

Here's the code responsible for preview apps:

```elixir
@app_name "phx-tools"
@environment_name "pr-${{ github.event.number }}"
@preview_app_name "#{@app_name}-#{@environment_name}"
@preview_app_host "#{@preview_app_name}.fly.dev"
@repo_name "phx_tools"

defp pr_workflow do
  [
    [
      name: "PR",
      on: [
        pull_request: [
          branches: ["main"],
          types: ["opened", "reopened", "synchronize"]
        ]
      ],
      jobs:
        elixir_ci_jobs() ++
          [
            deploy_preview_app: deploy_preview_app_job()
          ]
    ]
  ]
end

defp pr_closure_workflow do
  [
    [
      name: "PR closure",
      on: [
        pull_request: [
          branches: ["main"],
          types: ["closed"]
        ]
      ],
      jobs: [
        delete_preview_app: delete_preview_app_job()
      ]
    ]
  ]
end

defp delete_preview_app_job do
  [
    name: "Delete preview app",
    "runs-on": "ubuntu-latest",
    concurrency: [group: "pr-${{ github.event.number }}"],
    steps: [
      checkout_step(),
      [
        name: "Delete preview app",
        uses: "optimumBA/fly-preview-apps@main",
        env: [
          FLY_API_TOKEN: "${{ secrets.FLY_API_TOKEN }}",
          REPO_NAME: @repo_name
        ],
        with: [
          name: @preview_app_name
        ]
      ],
      [
        name: "Generate token",
        uses: "navikt/github-app-token-generator@v1.1.1",
        id: "generate_token",
        with: [
          "app-id": "${{ secrets.GH_APP_ID }}",
          "private-key": "${{ secrets.GH_APP_PRIVATE_KEY }}"
        ]
      ],
      [
        name: "Delete GitHub environment",
        uses: "strumwolf/delete-deployment-environment@v2.2.3",
        with: [
          token: "${{ steps.generate_token.outputs.token }}",
          environment: @environment_name,
          ref: "${{ github.head_ref }}"
        ]
      ]
    ]
  ]
end

defp deploy_job(env, opts) do
  [
    name: "Deploy #{env} app",
    needs: [
      :compile,
      :credo,
      :deps_audit,
      :dialyzer,
      :format,
      :hex_audit,
      :prettier,
      :sobelow,
      :test,
      :test_linux_script_job,
      :test_macos_script_job,
      :unused_deps
    ],
    "runs-on": "ubuntu-latest"
  ] ++ opts
end

defp deploy_preview_app_job do
  deploy_job("preview",
    permissions: "write-all",
    concurrency: [group: @environment_name],
    environment: preview_app_environment(),
    steps: [
      checkout_step(),
      delete_previous_deployments_step(),
      [
        name: "Deploy preview app",
        uses: "optimumBA/fly-preview-apps@main",
        env: fly_env(),
        with: [
          name: @preview_app_name,
          secrets:
            "APPSIGNAL_APP_ENV=preview APPSIGNAL_PUSH_API_KEY=${{ secrets.APPSIGNAL_PUSH_API_KEY }} PHX_HOST=${{ env.PHX_HOST }} SECRET_KEY_BASE=${{ secrets.SECRET_KEY_BASE }}"
        ]
      ]
    ]
  )
end

defp delete_previous_deployments_step do
  [
    name: "Delete previous deployments",
    uses: "strumwolf/delete-deployment-environment@v2.2.3",
    with: [
      token: "${{ secrets.GITHUB_TOKEN }}",
      environment: @environment_name,
      ref: "${{ github.head_ref }}",
      onlyRemoveDeployments: true
    ]
  ]
end

defp fly_env do
  [
    FLY_API_TOKEN: "${{ secrets.FLY_API_TOKEN }}",
    FLY_ORG: "optimum-bh",
    FLY_REGION: "fra",
    PHX_HOST: "#{@preview_app_name}.fly.dev",
    REPO_NAME: @repo_name
  ]
end

defp preview_app_environment do
  [
    name: @environment_name,
    url: "https://#{@preview_app_host}"
  ]
end
```

If you're wondering why you're seeing Elixir while working with GitHub Actions, you should read our blog post on the subject: Maintaining GitHub Actions workflows.

Let's explain what we're doing above. We run the pr_workflow when a PR is (re)opened or when new changes are pushed to it. It runs our code checks and tests and, if everything passes, runs the deploy_preview_app_job.

GitHub Actions workflow for PRs

The deploy_preview_app_job uses an action for deploying preview apps to Fly.io, which checks if the server is already set up. If it isn't, it creates the server, sets environment variables, etc. Then it deploys to it.

A preview app creation job that includes a DB and/or a volume doesn't differ from the one above at all. That's because our action optimumBA/fly-preview-apps internally checks whether an app contains Ecto migrations and, if it does, creates a DB if one doesn't exist yet. The same goes for the volume: it checks whether the fly.toml configuration contains any mounts and, if it does, creates a volume and attaches it to the app.

GitHub workflow for the website you're on

Preview app for one of the PRs

We set environment to let GitHub show the preview app in the list of environments. It will show the status of the latest deployment in the PR. We don't want too much noise in our PR from the deployment messages, so whenever we deploy a new version, we remove previous messages in the delete_previous_deployments_step.

List of deployments on GitHub

Deployment status message in the PR

Setting concurrency makes sure that two deployment jobs can't run simultaneously for the same value passed to it.
That prevents a hypothetical race condition with multiple pushes, where the deployment job for the latest commit could, for some reason, finish more quickly than the job for the previous commit, leaving an older version of the app running.

Don't forget to set GitHub secrets such as FLY_API_TOKEN. You might want to do that at the organization level so you don't have to repeat it for every repo. The token we've set in our GitHub organization belongs to a Fly.io user we created specifically for deployments to staging and preview apps. That user is part of a separate Fly.io organization, so even if the token gets leaked, our production apps are safe because the token has no access to them.

When we're done working on a feature, we want to clean up our environment. It might seem strange that we use the same action to delete our app, but the action handles this by checking which event triggered the workflow and acting accordingly. It destroys any associated volume and/or database, then the server. The next two steps of the delete_preview_app_job delete the GitHub environment. For some reason known only to GitHub, the process is more complicated than it should be, but Amos explains it well in his blog post.

Getting back to the part about databases: recently, the upstream version of the action gained an option to attach an existing PostgreSQL instance from Fly.io, but that still doesn't solve potential issues with migrations. Say you remove a table in one PR while another PR depends on that same table. The table will be deleted while deploying the first PR, which will in turn cause errors in the second PR's preview app. Our solution avoids that by creating a completely isolated environment for each PR.

Additionally, Fly.io recently introduced (or we've just discovered) the ability to stop machines after some period of inactivity. That has proved useful in lowering costs when we have many PRs open.
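The serialization that prevents this race condition maps onto GitHub Actions' concurrency setting; rendered to YAML, it boils down to something like the following (the group name here is a hypothetical example):

```yaml
# Workflow runs that share the same concurrency group are queued
# rather than run in parallel, so deploys for one PR never overlap.
concurrency:
  group: pr-123-example-repo
  cancel-in-progress: false
```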
In your fly.toml you probably want to set

[http_service]
auto_start_machines = true
auto_stop_machines = true
min_machines_running = 0

so your machines stop if you don't access them for some period. We haven't found a way to stop DBs for inactive apps yet. We weren't eager to, though, because we've always used the smallest instances for the preview apps' DBs. Only our apps sometimes have larger instances, which incur greater costs, so we see a benefit in stopping those when we don't use them.

More customization

Some applications might require setting up additional resources. In the StoryDeck app, one of the services we use is Mux.

When a user uploads a video, we upload it to Mux, which sends events to our webhooks. Whenever we create a new preview app, we need to let Mux know the URL of the new webhook. In theory, this could be solved with a simple proxy. In reality, it's more complicated than that: we don't want every preview app to receive an event when a video is uploaded from any of them. To know which preview app to proxy an event to, the proxy app would need to store associations between specific videos and the preview apps they were uploaded from, and we don't want to store that kind of data in the proxy app. Mux allows many different environments in one account, which is perfect for us, as each environment is a container for the videos uploaded from one preview app. What is not perfect is that there is currently no API for managing Mux environments, so we have to do it through the Mux dashboard. We've built the proxy app using Phoenix. It exposes a simple API that receives requests sent from GitHub Actions using curl. When a new preview app is created, a request comes in, and the proxy app drives the Mux dashboard using Wallaby: it creates a new Mux environment, sets up the webhook URL, fetches the Mux secrets, and returns them so that the GitHub Actions workflow can set them in our new Fly.io environment.
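To make the GitHub Actions side of that handshake concrete, here is a minimal sketch of composing such a request. The endpoint, field names, and app name are invented for illustration and are not StoryDeck's actual API:

```shell
# Hypothetical example: compose the JSON payload a workflow step
# might POST to the proxy app when a preview app is created.
APP_NAME="pr-123-storydeck"
payload="{\"app_name\":\"${APP_NAME}\",\"webhook_url\":\"https://${APP_NAME}.fly.dev/webhooks/mux\"}"
echo "$payload"

# A real step would then send it with curl, e.g.:
# curl -sS -X POST -H "Content-Type: application/json" \
#   -d "$payload" "https://proxy.example.com/api/environments"
```

The response would carry the Mux secrets for the fresh environment, which the workflow then sets on the new Fly.io app.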
When a preview app is deleted, our workflow sends a request to the proxy app, which then deletes the videos from Mux and deletes the Mux environment.

Creating a Mux environment and saving credentials in the GitHub Actions cache

That is just one example of what it might take to enable preview apps in your organization. It could seem like unnecessary work, but think of it as an investment in higher productivity and quality of work down the line.

This was a post from our Elixir DevOps series.
Almir Sarajčić

Portfolio

  • Phx.tools

    A powerful shell script for Linux and macOS that simplifies setting up a development environment for Phoenix applications using the Elixir programming language. It configures the environment in just a few easy steps, letting users start the database server, create a new Phoenix application, and launch the server seamlessly. The script is particularly useful for new developers who may find the setup process challenging. With Phx.tools, the Elixir ecosystem becomes more approachable and accessible, allowing developers to unlock the full potential of the Phoenix and Elixir stack.

    Phx.tools
  • Prati.ba

    A Bosnian news aggregator website that collects and curates news articles from various sources, including local news outlets and international media. The website covers a variety of topics, including politics, sports, business, culture, and entertainment.

    Prati.ba
  • StoryDeck

    StoryDeck is a cloud-based video production tool that offers a range of features for content creators. It allows users to store and archive all their content in one location, track tasks and collaborate easily with team members, and use a multi-use text editor to manage multiple contributors. The platform also offers a timecode video review feature, letting users give precise feedback on video files, as well as a publishing tool with SEO optimization capabilities for traffic-driving content.

    StoryDeck