Creating Real-Time Audio Applications

With Pro Punk Drums, I created a system for loading drum samples into a digital audio workstation (DAW) plugin using the JUCE framework, a popular choice for developing audio applications. The core objective was to have a digital instrument I could use to quickly demo ideas without setting up my drums and all the recording gear. I also wanted the ability to mix the drums with EQ filtering and dynamic range compression to get a better sound. And if everything goes well, I can repurpose the code for other sample libraries down the line.

The initial step was a technical endeavor of a different kind—recording and editing all the different drum instruments I wanted for the sampler. After accounting for multiple shells, cymbals, microphones, variations, articulations, and velocities, there were 360 individual samples. This number may seem large to some, or perhaps modest in comparison to other samplers, but I assure you it was extremely laborious to edit. Despite a deliberate effort to keep the collection manageable, it indeed felt overwhelming.

Loading Samples

After exporting the edited samples, I stored them in a directory and began work on the plugin itself. I started by iterating through the list of binary resources the JUCE framework generates when files are placed in a special folder (JUCE’s BinaryData system). The files are named in a way that lets my program infer their content and how to load them into a custom synthesizer, which then knows how to play back the samples. Loading involves parsing each sample name to identify the type of drum sound it represents, along with characteristics like the velocity layer and variation it belongs to. This is crucial for dynamic playback in the plugin, where different samples are triggered based on how hard or soft a note is played, or to introduce variation in the drum sounds.

In this process, I compare each sample’s name against a list of known drum sounds, choosing a naming convention based on the General MIDI “Percussion” specification. Although it’s probably the oldest and most well-known MIDI specification for drum sounds, it’s perfectly suited to my needs with a little creative mapping. Once a match is found, I dissect the sample name further to extract metadata such as the variation index, velocity index, and an optional microphone identifier, all encoded in the filenames during the sample creation process.
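
To make the parsing step concrete, here is a minimal sketch of the idea. The actual plugin is C++ built on JUCE; this TypeScript version, including the filename convention, the regular expression, and the helper names, is my own illustration rather than the plugin’s real code (the General MIDI note numbers shown are the standard ones).

```typescript
// Hypothetical filename convention: <Instrument>_v<velocity>_r<variation>[_<mic>]
// e.g. "Snare_v3_r2_OH" -> snare, velocity layer 3, variation 2, overhead mic.

interface SampleMetadata {
  midiNote: number;       // General MIDI percussion note the sample maps to
  velocityIndex: number;  // which velocity layer this sample belongs to
  variationIndex: number; // round-robin variation index
  microphone?: string;    // optional microphone identifier, e.g. "OH" or "Room"
}

// Assumed mapping from instrument name to a General MIDI percussion note.
const GM_PERCUSSION: Record<string, number> = {
  Kick: 36,        // Bass Drum 1
  Snare: 38,       // Acoustic Snare
  HiHatClosed: 42, // Closed Hi-Hat
  Crash: 49,       // Crash Cymbal 1
};

function parseSampleName(name: string): SampleMetadata | null {
  const match = name.match(/^(\w+?)_v(\d+)_r(\d+)(?:_(\w+))?$/);
  if (!match) return null;

  const [, instrument, velocity, variation, mic] = match;
  const midiNote = GM_PERCUSSION[instrument];
  if (midiNote === undefined) return null; // not a known drum sound

  return {
    midiNote,
    velocityIndex: Number(velocity),
    variationIndex: Number(variation),
    microphone: mic,
  };
}
```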

Once a sample is successfully identified and its properties determined, I proceed to configure audio parameters like default gain, pan, and phase for each sample. These parameters are essential for how the sample sounds in the mix, allowing adjustments in volume, stereo positioning, and phase inversion, respectively. Finally, each sample is added to a synthesizer object corresponding to its determined MIDI note, effectively loading the sample into the plugin’s synthesizer, ready for MIDI-triggered playback.

Processing the Playback

In this audio plugin, the processBlock function transforms incoming MIDI messages into sound akin to a natural drum kit. The function handles eight main channels: independent audio streams for Kick, Snare, Toms, Hi-Hat, Cymbals, Other, Room, and Output. Except for the special Output and Room channels, each channel renders its MIDI messages into an internal buffer and runs it through additional processors. Effects such as reverb, equalization (EQ), and compression are adjusted per channel based on user input. The audio from each channel is then mixed into the main output buffer, producing the nuanced, expressive sound the user hears.
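
Conceptually, the per-block work looks something like the sketch below, again in TypeScript rather than the plugin’s actual C++: render each channel, run its effect chain, and sum the result into the output. The channel and effect interfaces are assumptions made for illustration.

```typescript
type StereoBuffer = Float32Array[]; // [left, right]

interface Channel {
  name: string;                                   // e.g. "Kick", "Snare", "Room"
  render: (numSamples: number) => StereoBuffer;   // MIDI-triggered voices write audio here
  effects: Array<(buffer: StereoBuffer) => void>; // EQ, compression, etc., applied in order
  gain: number;                                   // user-controlled channel gain
}

function processBlock(channels: Channel[], numSamples: number): StereoBuffer {
  const output: StereoBuffer = [new Float32Array(numSamples), new Float32Array(numSamples)];

  for (const channel of channels) {
    const buffer = channel.render(numSamples);          // render this channel's voices
    channel.effects.forEach((apply) => apply(buffer));  // per-channel EQ/compression
    for (let ch = 0; ch < 2; ch++) {
      for (let i = 0; i < numSamples; i++) {
        output[ch][i] += buffer[ch][i] * channel.gain;  // mix into the main output
      }
    }
  }
  return output;
}
```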

Shipping It

When distributing the plugin, it’s important to note that different DAWs support various plugin formats. Most support the Virtual Studio Technology (VST) format, while Avid Pro Tools and Apple’s Logic Pro use Audio Units (AU) and Avid Audio eXtension (AAX) formats, respectively. JUCE abstracts these formats into a unified API, facilitating the generation of these files. Additional steps are required on macOS due to Apple’s security policies, including code-signing and notarization to comply with macOS Gatekeeper requirements. This ensures the software doesn’t contain malicious content and verifies its integrity even without an internet connection. The notarization process, combined with a seamless workflow provided by JUCE, ensures Pro Punk Drums is accessible and secure for users across both Windows and macOS platforms.

In summary, Pro Punk Drums is a digital instrument plugin that captures the essence of a live drum kit for digital audio workstations. By meticulously sampling and editing the drum sounds, and integrating them into a sophisticated digital instrument using JUCE, this plugin offers an authentic and versatile rock drum sound, available for free download and compatible with most DAWs.

Get the source code here.


Experimental Frontend Application Development

Far from being simple, simple is hard.

A calculator is the kind of simple everyone understands, and so is the idea that food is nutritious, and maybe even making calculations of the nutrients in food. JavaScript, as far as programming languages go, is also simple. If things get complex, the JavaScript ecosystem offers an abundance of choices. Further, web browsers are extraordinarily complex software tools, enabling the creation of applications ranging from spreadsheets to 3D games—tasks that once demanded specialized, standalone software. As a full-stack developer, most of my projects have been networked CRUD applications, which interact RESTfully with backend services and display information to users through a UI framework. While I find this development paradigm enjoyable and efficient, I’ve been eager to push the boundaries of my web development knowledge. Thus, I embarked on a personal project using unconventional tools, leading to the creation of Nutrition Planner—a venture into less explored (by me) areas of web development, and this is a record of one of those trips.

Painting pixels

At its core, Nutrition Planner is a calculator, perhaps even a simple one. The app is built around just two primary data structures: an item and a sub-item. An item is characterized by an ID, date, nutritional information (such as calories, serving size, and macronutrients), and its cost in cents. A sub-item, on the other hand, consists of an ID, a quantity, and the ID of its associated item. Leveraging these fundamental structures, I aimed to develop an application that encompasses a log, an item library, a recipe creator, and a planner.
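
As a sketch, the two structures might look like the following; the field names are my guesses rather than the exact ones used in Nutrition Planner.

```typescript
// An item: ID, date, nutritional information, and cost in cents.
interface Item {
  id: string;
  date: string;            // ISO timestamp
  priceCents: number;      // cost in cents
  servingSizeGrams: number;
  calories: number;
  proteinGrams: number;
  fatGrams: number;
  carbohydrateGrams: number;
}

// A sub-item: ID, quantity, and the ID of its associated item.
interface Subitem {
  id: string;
  quantity: number;
  itemId: string;
}
```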

However, my goals extended beyond merely creating an application capable of operating within a single session, displaying information to the user, and then erasing its memory upon reload. I wanted an app that could synchronize across multiple devices and be accessible on the web, mobile, and desktop platforms. With these dual objectives of simplicity and comprehensive platform integration in mind, I embarked on the search for the appropriate tools.

I started from my preference for robust front-end frameworks such as Bootstrap, Foundation, MaterialUI, and Ant Design, all of which are great tools. Their size is worth noting: by modern standards it isn’t at all large, and these frameworks are highly optimized and offer remarkable capabilities. However, their complexity is undeniable, encompassing extensive class, line-of-code, API, and parameter counts. In most professional settings, leveraging and customizing these tools often demands a full-time specialist’s attention.

Contrastingly, Chakra UI presents a refreshing deviation from this norm. It might appear almost simplistic or “toy-like” at first glance, but this observation isn’t meant as criticism. Instead, I find its simplicity refreshing. Chakra UI strikes a balance, offering just enough opinion to guide design without overwhelming the user with superfluous features. It provides straightforward components for common needs like Grid, Modal, Table, Button, and Form. Compared to other frameworks I’ve used, working with Chakra UI feels akin to using “stock” React—it’s intuitive, efficient, and unobtrusive, maintaining a “normal” look and feel. I stumbled across it while learning about web development one day, and decided to use it for this project.
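
For a flavor of what that looks like in practice, here is a small, illustrative component using a few of those pieces; it is not code from Nutrition Planner itself.

```tsx
import {
  Button,
  Modal,
  ModalBody,
  ModalContent,
  ModalHeader,
  ModalOverlay,
  useDisclosure,
} from "@chakra-ui/react";

export function AddItemButton() {
  // Chakra's useDisclosure hook manages the modal's open/closed state.
  const { isOpen, onOpen, onClose } = useDisclosure();

  return (
    <>
      <Button colorScheme="blue" onClick={onOpen}>
        Add item
      </Button>
      <Modal isOpen={isOpen} onClose={onClose}>
        <ModalOverlay />
        <ModalContent>
          <ModalHeader>Add item</ModalHeader>
          <ModalBody>{/* form fields would go here */}</ModalBody>
        </ModalContent>
      </Modal>
    </>
  );
}
```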

This decision led me to commit to using React for this project. Although Chakra UI doesn’t exclusively require React, and can be adapted to other frameworks or libraries, my decision was influenced by React’s widespread adoption and my positive past experiences with it. This combination of React’s versatility and Chakra UI’s simplicity seemed perfectly suited for the development of my application, aligning with my goal of creating a user-friendly and accessible project.

Multi-platform

After deciding on Chakra UI as the UI framework for the project, my next step was to identify a suitable project structure or boilerplate that could streamline the development process, addressing aspects such as page layout, routing, building, and deploying the application. Starting with the latter concern, it was around this time that I became interested in Electron, a project created by GitHub. Electron facilitates the use of the Chromium engine as an “application runtime,” enabling developers to distribute their web applications as standalone, “headless browser”-like desktop applications. Essentially, it allows JavaScript applications to run independently outside of the traditional web browser environment.

I was attracted to Electron as a solution for encapsulating a simple website within a package that mimics the appearance and behavior of a native desktop application, especially when paired with a native font stack to give that “home-y” feel. The opinions on using Electron for such purposes are varied, with some critics suggesting that a website should suffice for most cases. While I understand and even agree with these arguments, my choice to explore Electron was driven by a desire for experimentation and the unique learning experience it offers, rather than out of any practical necessity. By adopting Electron, I aimed to create an application that not only serves its intended purpose but also provides a distinct user experience, akin to that of a native application, purely for the enjoyment and challenge it presents.

Addressing the desktop experience through Electron was just one part of my plan. Equally important was my goal to support mobile devices in a manner that feels native. To bridge this gap, I aimed to ensure that the web application, while operational within Electron for desktop environments, would also function as a “progressive web app” (PWA) for mobile users. PWAs have been written about extensively elsewhere, so I won’t reiterate how great they can be here; the links provided should be sufficient.

Meta-frameworks

Transitioning to the development phase where both desktop and mobile platforms are supported, my next challenge involved crafting the website that would double as the application in its “native” form. Given my choice of Chakra UI with its React integration, it became imperative to find a solution for managing the application’s routing and behavior within the React ecosystem. Previously, I had relied on React-specific tools like React Router to simulate a content management system using JSX, a process I often found cumbersome.

During my exploration of potential solutions, I discovered Next.js. Hardly a small-time niche tool, its comprehensive documentation and adoption by numerous reputable companies caught my attention, prompting me to experiment with it in this project. Next.js appealed to me as a learning opportunity, even if it was somewhat of a departure from the project’s primary requirements. In retrospect, while Next.js introduced me to concepts like server-side rendering and authentication—features not directly relevant to my project’s goals—it provided a straightforward mechanism for defining routes and behaviors. For example, creating a file named items.tsx effortlessly generated a corresponding URL /items and facilitated easy linking within the app.
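
For example, with the pages router (and a reasonably recent version of Next.js), a route needs nothing more than a file; the component below is illustrative.

```tsx
// pages/items.tsx — creating this file alone produces the /items route.
import Link from "next/link";

export default function ItemsPage() {
  return (
    <main>
      <h1>Items</h1>
      {/* Linking between routes is equally direct. */}
      <Link href="/log">Go to the log</Link>
    </main>
  );
}
```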

Having gained more experience with Next.js since then, I’ve come to view it as somewhat overkill for simpler projects, given its expansive feature set that includes multiple routing and rendering methods. This complexity can sometimes contribute to a bloated developer experience, straying from the simplicity I initially sought. Nevertheless, Next.js proved to be a valuable tool, enabling me to define and implement the application’s structure and navigation effectively, even if it wasn’t the perfect fit in hindsight. Its utility in this context underscores the importance of selecting the right tools based on the specific needs and goals of a project, a lesson that has informed my approach to web development moving forward. Additionally, I’ve read that Vercel’s (the company behind Next.js) deployment solution gets expensive at large scale, but it’s perfect for a hobbyist with an open-source project.

Reactive database

Having established the means to deploy the application across web, mobile, and desktop platforms, and to create UI elements that are consistent across these environments, my next challenge was to manage data in a way that supported both offline functionality and optional data synchronization with an external endpoint. My goal was to create an offline-first application that allowed users the flexibility to integrate their own backend solutions if they chose to do so. This led me to discover RxDB, a solution that perfectly aligned with my project requirements.

RxDB is a reactive, offline-first database library designed for real-time applications. It supports a variety of storage backends and offers seamless replication capabilities. Initially, RxDB could use PouchDB for local data storage, leveraging the browser’s IndexedDB for data persistence. This setup facilitated straightforward replication with a remote CouchDB server, providing a sync mechanism that was both efficient and easy to implement. The architecture of RxDB, with its emphasis on reactivity and offline accessibility, made it an ideal choice for my project.

However, the learning curve was steep, especially with the intricacies of configuring PouchDB and CouchDB, as well as understanding the underlying IndexedDB storage mechanism. The introduction of major version 14 of RxDB brought significant changes, including a dedicated CouchDB replication plugin and improved support for IndexedDB, either directly or via Dexie.js. These updates aimed to simplify the database management experience and expand the library’s capabilities. For the first version of the app, which used RxDB 12, I used its PouchDB plugin configured to store locally in IndexedDB, replicating to CouchDB if there was an available URL configured.
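
Roughly, that first setup looked like the sketch below. The exact plugin names and the replication call changed between RxDB versions, and the schema shown is simplified, so treat this as illustrative rather than the app’s actual code.

```typescript
import { createRxDatabase } from "rxdb";
import { addPouchPlugin, getRxStoragePouch } from "rxdb/plugins/pouchdb";

// Let PouchDB persist into the browser's IndexedDB.
addPouchPlugin(require("pouchdb-adapter-idb"));

export async function initDatabase() {
  const database = await createRxDatabase({
    name: "nutritionplanner",
    storage: getRxStoragePouch("idb"),
  });

  await database.addCollections({
    item: {
      schema: {
        title: "item",
        version: 0,
        primaryKey: "id",
        type: "object",
        properties: {
          id: { type: "string", maxLength: 64 },
          date: { type: "string" },
          priceCents: { type: "number" },
          calories: { type: "number" },
          subitems: { type: "array", items: { type: "object" } },
        },
        required: ["id"],
      },
    },
  });

  // If the user has configured a CouchDB URL, replication to that endpoint
  // starts here (the exact replication API differs between RxDB versions).
  return database;
}
```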

Getting started

With the architectural foundation for the application firmly in place, the next step was the actual development work. This phase was significantly expedited thanks to discovering Nextron, a project that seamlessly integrates Next.js, Electron, and Chakra UI (or other UI frameworks). Nextron provided a pre-configured template that bridged these technologies, offering a straightforward starting point for the project. This integration facilitated the creation of an application that could run on desktop environments via Electron, while also leveraging the design and development efficiencies of Next.js and Chakra UI.

Upon incorporating RxDB along with the necessary RxDB Hooks for React, I began developing the data layer and UI components of the application. The UI components were deliberately kept simple to maintain the project’s focus on ease of use and straightforward functionality. One notable exception was the implementation of an “infinite scroll” mechanism for rendering a table, inspired by the “table view” in iOS or “list view” on Android. Although a data table with navigation buttons might have been a more conventional, and manageable, choice for this purpose, I opted for the infinite scrolling approach to preserve the application’s simplicity for the user.

Additionally, I integrated “react-big-calendar” (interestingly from the same programmer who maintains Yup, and a fellow New Jerseyan) to quickly set up the log view, aiming for a user-friendly and visually appealing interface for managing and viewing entries. This choice, while perhaps unconventional, proved effective, enabling the rapid deployment of a functional log view with minimal debugging required.

The development process, guided by the principles of simplicity and functionality, highlighted the value of selecting the right tools and libraries to meet the project’s goals. By combining the strengths of Nextron, RxDB, Chakra UI, and other React components, I was able to create an application that not only met the initial requirements but also offered a seamless and intuitive user experience across desktop and mobile platforms.

The first sign of trouble

The initial foray into developing the data layer of the application revealed an unexpected and challenging aspect that complicated the integration with TypeScript: the recursive nature of the data structure. It’s recursive because Items have Sub-items, which have Items, and so on. This recursive design was essential for achieving the desired functionality, where “recipes” or “groups” could contain not only items but also other groups. Similarly, “plans” could be comprised of items, groups, or a combination thereof, with “logs” including plans, groups, or items. Despite all these entities ultimately being treated as Item objects, their recursive relationships posed a significant challenge for type validation within TypeScript.
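
A trimmed-down sketch of the shape, and of the Yup schemas that mirror it, looks like this. The field names are illustrative, and the explicit schema annotations are one way to stop the type inference from spiraling.

```typescript
import * as yup from "yup";

// The shape that caused trouble: an Item contains Sub-items, and a Sub-item
// points back at an Item, so the structure is arbitrarily deep.
interface Item {
  id: string;
  priceCents: number;
  calories: number;
  subitems: Subitem[];
}

interface Subitem {
  id: string;
  quantity: number;
  item?: Item;
}

// Validation schemas mirroring that shape need yup.lazy() to defer the cycle;
// asking TypeScript to infer their static types is where compilation can spiral,
// hence the explicit annotations to cut the inference short.
const subitemSchema: yup.AnyObjectSchema = yup.object({
  id: yup.string().required(),
  quantity: yup.number().required(),
  item: yup.lazy(() => itemSchema.default(undefined)),
});

const itemSchema: yup.AnyObjectSchema = yup.object({
  id: yup.string().required(),
  priceCents: yup.number().required(),
  calories: yup.number().required(),
  subitems: yup.array().of(subitemSchema),
});
```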

This complexity was compounded by the use of Yup schemas for form validation and the corresponding definition of RxDB schemas for database structuring and versioning. The recursive data model led to an issue where TypeScript’s type validation became infinitely recursive, making it impossible to complete at “compile” time. This problem had no direct impact on the correctness of the application’s functionality but presented a significant obstacle for type-checking during development.

As a result, I was forced to disable TypeScript’s type-checking to proceed with development. This workaround, however, came with drawbacks, notably undermining many of the advantages TypeScript offers, such as enhanced code reliability and developer productivity through static type-checking. Additionally, the issue adversely affected the development environment’s tooling, causing operations like code formatting and the application of “quick fixes” to experience intolerable delays.

Navigating this challenge highlighted the complexities of working with recursive data types. Despite these hurdles, the project moved forward, albeit with concessions made in terms of the development experience and the benefits typically afforded by TypeScript’s type safety features.

I also discovered another, more minor, problem with my data model: while it was desirable for edits to recipes and plans to change the “downstream” prices and nutrition, that behavior was not desirable for logs. The need to preserve the integrity of past entries, under the obvious premise that “you can’t change the past,” led to the implementation of a deep-copy system for logging purposes. This system ensured that any modifications made to items or groups would be accurately reflected in current and future plans without retroactively affecting the historical records in the log.
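
A minimal sketch of that rule, reusing the Item shape from the earlier sketch and assuming a runtime where structuredClone and crypto.randomUUID are available:

```typescript
// When something is logged, its nutrition and price are snapshotted so later
// edits to the library item can't rewrite history.
function createLogEntry(item: Item, quantity: number, loggedAt: Date) {
  const snapshot = structuredClone(item); // deep copy, including nested subitems
  return {
    id: crypto.randomUUID(),
    date: loggedAt.toISOString(),
    quantity,
    item: snapshot, // frozen-in-time copy, not a live reference
  };
}
```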

Despite the difficulties posed by the absence of type-checking, I persevered, successfully bringing the application to a functional state. Users could add items to the database by importing nutrition information, create groups from these items (and potentially other groups), form plans from these groups and items, and generate logs from all the aforementioned entities. These logs were then elegantly displayed on a calendar, offering users a comprehensive view of their activities and plans.

The architectural decisions made throughout the development process resulted in a versatile application that could be experienced as a native desktop application, a progressive web app on mobile devices, and a fully-featured website. The project, thus, achieved its goal of creating a unified, cross-platform solution that leverages modern web technologies to provide a seamless, user-friendly experience.

A syncing feeling

The second significant challenge I encountered was related to data synchronization, specifically integrating CouchDB as the replication backend for the Nutrition Planner. Having no prior experience with CouchDB, I faced a steep learning curve. CouchDB had historically offered a lenient security model, with default settings where all users had “admin” rights; that convenience had recently been removed at the time of implementation, complicating user management.

I managed to deploy CouchDB on Linode and configured it for secure network access to enable replication. However, one notable limitation was the absence of an automated process for provisioning new users directly via a URL; instead, it required manual intervention through Fauxton to create each new user. This aspect was somewhat disappointing, given my goal for a more seamless user experience.

Despite this hurdle, the flexibility of CouchDB offered a compelling advantage: the application could connect to any functional CouchDB endpoint for data replication. This architecture allowed for a “personal cloud” experience, where users had the option to utilize a managed service backend provided by the application or to “bring their own” backend. This approach empowered users with full control over their data, enhancing the application’s privacy and customization options. Users could not only ensure the security of their data but also leverage it across other applications if desired.

This dual capability—offering a managed service for ease of use or allowing users to host their own CouchDB instances—highlighted the application’s versatility and its potential to serve a wide range of user preferences and needs. It underscored the project’s commitment to user data autonomy and privacy, providing a foundation for a more personalized and secure user experience.

A year or so later

Everything described before this was the initial development work, which at the time of writing was over a year ago. Bringing the narrative up to date: a few days ago, I decided to update Nutrition Planner to the latest reasonable dependencies, for the sake of maintaining it in something of a presentable state. The transition away from using PouchDB as an intermediary for CouchDB synchronization, as dictated by the major changes in RxDB version 14, marked a significant pivot in the application’s development. RxDB’s decision to drop PouchDB support was based on that implementation’s performance issues, a move I found entirely reasonable given the aim for efficiency and reliability in data synchronization.

The relative lack of widespread adoption for CouchDB, compared to other database services, and the complexities involved in its deployment and configuration, further validated the need for a shift in the application’s backend strategy. CouchDB’s niche appeal, primarily among enterprises and enthusiasts, underscored the importance of exploring more accessible and user-friendly options for data replication.

Fortunately, RxDB’s support for Firebase Firestore as an alternative replication target presented a viable path forward. Coupled with the option to use Dexie for interacting with IndexedDB, this transition seemed straightforward—at least in theory. The process involved updating the application’s dependencies and replacing the PouchDB and CouchDB integration with Dexie and Firestore, respectively. With Firebase, users could create their own instance and add the relevant parameters in the settings tab of the application. While this is a (relatively) good user experience, because it’s very easy to spin up a Firestore, it comes with the obvious and inevitable risk that one day Google will kill or charge for Firebase.
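
In rough terms, the settings-to-Firestore wiring looks like the sketch below. The setting field names and the collection name are assumptions on my part, and the hand-off to RxDB’s Firestore replication plugin is only indicated in a comment rather than spelled out.

```typescript
import { initializeApp } from "firebase/app";
import { collection, getFirestore } from "firebase/firestore";

// Parameters the user enters in the application's settings tab (names assumed).
interface SyncSettings {
  apiKey: string;
  authDomain: string;
  projectId: string;
}

function connectFirestore(settings: SyncSettings) {
  const app = initializeApp({
    apiKey: settings.apiKey,
    authDomain: settings.authDomain,
    projectId: settings.projectId,
  });
  const firestore = getFirestore(app);

  // The item documents live in a single collection; RxDB's Firestore
  // replication plugin is then pointed at it for push/pull sync.
  return collection(firestore, "items");
}
```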

This is not a big deal relative to my next finding, unfortunately. While the application successfully writes data to Firestore, it seems unable to pull from Firestore. The problem persisted despite the configuration appearing correct; the occasional exception was when I used an in-memory store for document management. That workaround, though effective for ensuring data was received from Firestore, was impractical for the application’s intended offline-first and data-intensive use case. There will be some line of code responsible for this error, probably even one I wrote, but I think the fundamental cause is more intangible: there’s IndexedDB, with its own quirks, and Dexie.js, with its own quirks, and Firebase Firestore, with its own quirks. Ideally, there would be some software solution that persists locally to IndexedDB and, when available, syncs that data to a remote endpoint if the user configures one. While not at all an option in this environment, Apple’s “Core Data with CloudKit” strikes me as an enviable API.

Throw away one

Reflecting on the journey and the choices made, there are several areas where I would consider alternative approaches if I were to embark on this project anew, even with its status as a technological sandbox. One such reconsideration would be the infinite scrolling table used to display items, groups, and plans. While functional, its performance degrades with larger data sets, primarily due to the lack of virtualization, risking memory overflow and degraded user experience.

Additionally, while Nextron provided an invaluable starting point with its seamless integration of Next.js and Electron, its lag behind the latest versions of Next.js introduces potential limitations. This factor, combined with my reservations about the full suitability of Next.js for this project, prompts me to explore other boilerplates, such as the Electron React Boilerplate, known for its robust support and up-to-date practices.

Despite these considerations, my familiarity with Next.js and its routing capabilities, honed through extensive use, suggests its retention in future iterations of the project. Separately, my experience with Material UI has grown, recognizing its value as a comprehensive UI framework despite its relative heft. The framework’s data table component, in particular, stands out for its functionality and could address some of the current application’s UI limitations. For a more fun and mobile-minded application, I’d be interested to try Framework7.

Navigating through the complexities of modern web development, as highlighted in the journey of creating the Nutrition Planner, reveals that paradox I opened with. The process of rendering UI elements and managing data storage presents its own set of challenges, yet these tasks pale in comparison to the intricacies of user management and data replication across devices and platforms.

Drawing pixels on a screen is a well-understood problem with numerous effective solutions, and storing information in IndexedDB, while perhaps less common than other storage methods, still has myriad good tools. However, my quest for an open-source, easily deployable solution for user provisioning and seamless replication of IndexedDB data remains a significant challenge. RxDB and Dexie offer data synchronization in their paid, premium offerings, and CouchDB provides a framework close to the ideal, but there remains a gap in the ecosystem (or at the very least in my knowledge of it) for a solution that combines ease of deployment with the flexibility and security necessary for user data management. (And while Firebase’s Firestore is basically what I want, it’s disqualified for ideological reasons.)

The exploration of these technologies, despite the occasional sense of being overwhelmed by the vast array of tools and frameworks, enriched my understanding of what is possible. And having spilled all this ink documenting my research into creating a simple application to calculate the cost and nutrition of my lunch, I return to that too familiar paradox I opened with:

Far from being simple, simple is hard.


Foundations of a Backend System for a Modern Business

The Picket platform aims to simplify the home inspection process, to make it as easy as filling out a form, or better yet, for your realtor to do it for you. It lets homebuyers easily connect with local, qualified home inspectors on their terms, thereby removing the guesswork and inconvenience that often come with finding and scheduling inspections. Both homebuyers and inspectors have the flexibility to choose the timing and location of their inspection, and the Picket team ensures all on-platform home inspectors are licensed and qualified. The purpose of this document is to provide insight into the technical infrastructure and decision-making behind Picket’s backend system, focusing on the various challenges encountered and the solutions devised during its development. I wrote the first line of Picket on February 25th, 2020, and worked on it and all its parts for more than three years. The goal is to share the experiences and lessons learned throughout this development process, highlighting the technologies used and how I would change my approach if I were to do it again.

Problem

The traditional home inspection industry is fraught with complexities and coordination challenges, involving multiple stakeholders such as realtors, homebuyers, home sellers, and inspectors. For most individuals, buying a home is a rare occurrence, perhaps only happening a few times in their lifetime. This infrequency means that the majority of homebuyers lack expertise in navigating the home inspection process, further compounded by the critical timing often required in these transactions. The coordination between the various parties can be cumbersome, leading to delays and increased stress for homebuyers who are under pressure to move quickly. Given these challenges, there is a clear need for innovation in the way homebuyers connect with qualified inspectors. A streamlined, efficient system not only has the potential to reduce the stress associated with timing and lack of familiarity but also ensures that all parties are adequately informed and prepared, leading to a more transparent and effective inspection process. The design of this interaction and the idea to solve this problem with technology was conceived by the Picket management team, which included a realtor, an inspector, and an entrepreneur. I joined as the technologist, and what follows are some of the decisions I made to realize their vision.

Services

In developing the backend for Picket, the selection of external services was strategically aligned with the principles of reliability, ease of setup, and robust support. Given the necessity to also develop a frontend website component, choosing a technology stack centered around the JavaScript ecosystem was nearly obvious. This prerequisite meant that only services with Node.js support, and ideally TypeScript definitions, were considered. Consequently, platforms such as Heroku and AWS were selected for their reputable standing in hosting and storage solutions, complemented by well-documented and maintained Node.js and NPM integrations that streamline development workflows. Stripe was chosen for payment processing, thanks to its solid reputation, extensive documentation, and secure, efficient transaction handling. Twilio was incorporated for SMS communication services, leveraging its comprehensive API library and strong community support to enable seamless text messaging integration. Other services like Bitly and SendGrid were similarly chosen for their straightforwardness and efficiency in URL shortening and email delivery, respectively. These services not only offer the advantage of quick setup and user-friendliness but also stand out for their dependable performance, supported by thorough documentation and active communities on platforms such as StackOverflow. This carefully selected ecosystem of proven services ensures that developers, in this case me, have access to an extensive pool of knowledge and troubleshooting resources, thereby enhancing the development process’s overall smoothness and efficiency.

Tools

The decision to utilize Node.js, NPM, and TypeScript was primarily influenced by the need to concurrently develop a frontend and ensure robust integrations with the aforementioned services, significantly narrowing the still crazily broad horizon for backend tool selection. The tooling for GraphQL on both the backend and frontend was, and continues to be, incredibly productive. Despite the initial overhead of setting up a GraphQL server and the need for diligent monitoring of query efficiency, the benefits of using TypeScript annotations to define GraphQL schemas—which, in turn, facilitated the frontend’s ability to generate queries—were deemed sufficient to justify its selection. Apollo Server Express emerged as the standout choice due to its exceptional community support and comprehensive documentation. The adoption of Express was reflexive: it is so ubiquitous in the JavaScript ecosystem that choosing it was almost subconscious.
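
The post doesn’t name the schema library, but the “TypeScript annotations define the GraphQL schema” workflow looks roughly like this with a decorator-based library such as TypeGraphQL; the Job type and resolver are illustrative, not Picket’s actual code.

```typescript
import "reflect-metadata";
import { Field, ID, ObjectType, Query, Resolver } from "type-graphql";

@ObjectType()
class Job {
  @Field(() => ID)
  id!: string;

  @Field()
  address!: string;

  @Field()
  scheduledFor!: Date;
}

@Resolver(() => Job)
class JobResolver {
  @Query(() => [Job])
  async jobs(): Promise<Job[]> {
    // In the real service this would query the database through the ORM.
    return [];
  }
}
```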

For potential support of more complex authentication scenarios and to streamline the identity system, Redis was chosen to enable a stateful model. In retrospect, this was perhaps unnecessary, as no chat features were ultimately implemented, and a stateless model would have been adequate. Nonetheless, it did not introduce any negative consequences. At that time, TypeORM was a relatively new package for database interaction, not yet having reached version 1.0. Despite the risks associated with its nascent state, it was the best available option and has since matured into a robust tool with a wide range of advanced features. The ability to use SQLite locally for development was particularly appealing for its simplicity in database management, allowing for direct file access to view and modify data, a convenience preferred over more complex setups like connecting to PostgreSQL through pgAdmin. Nonetheless, PostgreSQL was the natural choice for production due to its scalability, seamless integration with TypeORM, and support on Heroku.
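
The “SQLite locally, PostgreSQL on Heroku” arrangement might be sketched as follows. Note this uses TypeORM’s newer DataSource API rather than the createConnection API the project would have started on, and the entity is illustrative.

```typescript
import "reflect-metadata";
import { Column, DataSource, Entity, PrimaryGeneratedColumn } from "typeorm";

@Entity()
export class Inspection {
  @PrimaryGeneratedColumn("uuid")
  id!: string;

  @Column()
  address!: string;

  @Column({ nullable: true })
  scheduledFor?: Date;
}

const isProduction = process.env.NODE_ENV === "production";

export const dataSource = new DataSource(
  isProduction
    ? {
        type: "postgres",
        url: process.env.DATABASE_URL, // provided by Heroku
        entities: [Inspection],
        migrations: ["dist/migrations/*.js"],
      }
    : {
        type: "sqlite",
        database: "dev.sqlite", // a plain file you can open and inspect directly
        entities: [Inspection],
        synchronize: true,
      }
);
```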

The inception of the Picket project was guided by an understanding that the full scope of required functionalities could not be entirely predicted from the start. To address this, the development strategy emphasized adaptability, allowing for quick iterations and modifications as new needs emerged. This approach led to the adoption of code generation, which, when paired with GraphQL, introduced some inefficiencies and challenges. A system was designed to automate the generation of queries for resources and their interconnections, allowing developers to easily access nested relationships up to a specified “depth.” Despite its advantages in facilitating dynamic data retrieval, this method raised concerns over query efficiency and the risk of circular references; but given its productivity, and that we avoided any “premature” optimization, it was ultimately a success, facilitating the quick development and changes described below.
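
A toy version of that generation step is sketched below: given a description of each resource’s scalar fields and relations, it emits a GraphQL selection set down to a fixed depth, which is also what keeps circular references from recursing forever. The resource shapes here are hypothetical.

```typescript
interface ResourceShape {
  scalars: string[];
  relations: Record<string, string>; // field name -> related resource name
}

const shapes: Record<string, ResourceShape> = {
  Job: { scalars: ["id", "address"], relations: { inspector: "Inspector" } },
  Inspector: { scalars: ["id", "name"], relations: { jobs: "Job" } },
};

function selectionSet(resource: string, depth: number): string {
  const shape = shapes[resource];
  const fields = [...shape.scalars];
  if (depth > 0) {
    // Follow relations only while depth remains, cutting off circular references.
    for (const [field, related] of Object.entries(shape.relations)) {
      fields.push(`${field} { ${selectionSet(related, depth - 1)} }`);
    }
  }
  return fields.join(" ");
}

// selectionSet("Job", 2) ->
// "id address inspector { id name jobs { id address } }"
```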

Features

As Picket matured, it was made to serve four distinct user groups, each with tailored interfaces and functionalities:

  • Administrators had access to a comprehensive dashboard, providing a holistic view of the platform’s data.
  • Customers engaged with the platform through a streamlined, “logged out” experience, primarily via email and SMS communications.
  • Inspectors were granted access to a dashboard focused on managing their job assignments.
  • Realtors had the capability to refer jobs and inspectors, favoring those within their professional network.

The diverse needs of these user groups underscored the necessity for a robust communication framework to ensure transparency and facilitate seamless audits of interactions. To this end, a sophisticated communication log system was established for administrators to review, capturing all user interactions and supported by a versatile templating system. This system enabled real-time modifications to email and SMS templates, allowing for the rapid integration of business insights into communication strategies.

Operational agility was further augmented by configurable business parameters, including inspection lead times, cancellation policies, and referral percentages, among others. These adjustable settings empowered Picket to swiftly adapt to changing business landscapes and user demands. Additionally, the integration of the US government’s ZIP code database provided a mechanism for selectively enabling or disabling service locations. Administrators could activate services for entire states, specific counties, or individual ZIP codes.

To refine targeting and incentivize customer engagement, a promotional code system was implemented. This mechanism allowed for adjustments to service estimates in a way that preserved inspector compensation, ensuring that promotional activities did not compromise service provider interest.

The overarching aim was to equip administrators with the tools to implement business modifications based on real-time feedback—whether it involved activating services in new ZIP codes, updating email content, or confirming the timely delivery of SMS messages to the intended recipients. This comprehensive and adaptable system was central to Picket’s mission, enabling it to respond quickly to market demands and user feedback.

Challenges

Addressing the challenges of rapidly evolving requirements on the backend, we encountered the inherent rigidity of database schemas and endpoints when faced with changing needs. Despite these hurdles, TypeORM’s migration generation tool proved invaluable, enabling fluid adjustments as requirements evolved. This flexibility was complemented by the frontend’s ability to generate code directly from the GraphQL schema, simplifying the process of adding new fields, updating types, and removing obsolete data.

Reflecting on the decision-making process, my principal oversight was opting for Redis for authentication when a stateless approach would have been entirely adequate. At the time, still honing my skills in Node.js backend development, I explored various options and ultimately made an ill-suited choice. I anticipated benefits from utilizing an in-memory database that, in hindsight, were premature and introduced unnecessary complexity. This experience highlighted a missed opportunity to leverage Node.js backend boilerplates, which, had I been more familiar with or had access to better options at the time, could have provided a solid foundation for authentication and session management, simplifying the development process.

Time management presented another significant challenge, particularly in programming backend logic to respond dynamically to specific time-based events, such as notifications preceding inspections or following job offers to inspectors. To avoid the inefficiency of continuously running cron jobs, I adopted strategies that averaged time ranges from their originating events. This approach, while practical, sometimes resulted in closely spaced notifications, a compromise between efficiency and responsiveness.

The evolving nature of business requirements underscored the necessity for a system that could adapt without constant developer intervention. The tools I developed were effective at empowering administrators to tailor the platform to their needs. It should be noted, however, that the more things can change, the higher the potential for inconsistent or unexpected behavior, since it becomes difficult to test all the interconnected parts. All things considered, and to transition into the lessons I learned from Picket, I recommend the following: if a value can be a configuration variable, it can probably just as easily be a runtime variable in the database that a qualified user other than a developer can modify, so long as they understand the consequences.
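
As a minimal sketch of that advice, a settings table read at the point of use (rather than a deploy-time constant) might look like this, reusing the hypothetical dataSource from the earlier TypeORM sketch; the key names are illustrative.

```typescript
import { Column, Entity, PrimaryColumn } from "typeorm";

@Entity()
export class Setting {
  @PrimaryColumn()
  key!: string; // e.g. "inspectionLeadTimeHours", "referralPercentage"

  @Column()
  value!: string;
}

// (Setting would also need to be registered in the DataSource's entities.)
export async function getNumberSetting(key: string, fallback: number): Promise<number> {
  const row = await dataSource.getRepository(Setting).findOneBy({ key });
  return row ? Number(row.value) : fallback; // admins edit the row, not the code
}
```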

Lessons

Developing Picket immersed me into the nuances of Node.js backend systems, offering lessons ranging from database design and the limitations of Object-Relational Mapping Systems—like the inevitable reliance on raw SQL—to the significance of keeping up-to-date with the Long-Term Support (LTS) versions of Node.js, and the intricacies of delivering dynamically changing information in a RESTful manner. The experience yielded valuable methodological insights that have significantly shaped my approach to subsequent projects. As the landscape of backend development tools continues to evolve, becoming increasingly accessible, I’ve learned the importance of scrutinizing developments that veer away from direct business logic. Aspects such as “translation layers,” “module organization,” or any endeavor that verges on creating a “framework” warrant a cautious approach by default.

In today’s context, where development tools have reached remarkable levels of sophistication, any effort spent on non-business logic elements on the backend should prompt a consideration of existing high-quality, open-source projects that may have already addressed similar challenges. Reflecting on this, should I embark on redeveloping Picket’s backend, my choice would lean towards utilizing Nest.js and a corresponding Nest.js boilerplate. Nest.js, functioning as a comprehensive “meta-framework” for Node.js, employs an “Angular-like” methodology for structuring modules and dependencies. This insight emerged from my personal experiences with constructing a custom “micro-framework,” where the minor, yet persistent, inconveniences and the complexities of managing growing inconsistencies became evident. The approach of categorizing features into folders based on their concerns led to a web of interdependencies, at times cyclical, which, when combined with type-checking and transpilation, introduced complex debugging challenges. Nest.js, designed with foresight for such issues, provides robust mechanisms for their resolution, along with default and tested integrations for GraphQL, RESTful APIs, Swagger/OpenAPI, TypeORM, and more.
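
For a sense of what Nest.js provides out of the box, here is a minimal, hypothetical feature module: the framework, rather than a hand-rolled folder convention, wires the controller to its service.

```typescript
import { Controller, Get, Injectable, Module } from "@nestjs/common";

@Injectable()
class InspectionsService {
  findAll() {
    return []; // would delegate to the ORM in a real service
  }
}

@Controller("inspections")
class InspectionsController {
  // Nest's dependency injection supplies the service instance.
  constructor(private readonly inspections: InspectionsService) {}

  @Get()
  findAll() {
    return this.inspections.findAll();
  }
}

@Module({
  controllers: [InspectionsController],
  providers: [InspectionsService],
})
export class InspectionsModule {}
```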

However, this approach comes with its own set of challenges: the risk of over-reliance on external code without a thorough understanding of its inner workings. Striking a balance between leveraging existing solutions and developing custom ones is no simple task. It demands an ongoing process of learning and reflection to determine when to utilize external resources and when unique issues necessitate tailored solutions. This equilibrium is pivotal in backend development, where differentiating between the use of available tools and the need for innovation is crucial to ensuring both the quality and efficiency of the work. To distill the advice succinctly: stick to business logic.

At the risk of making assertions about non-technical topics I am less comfortable with, I’d like to speculate briefly about how this maxim should affect decision-making for business people. Management plays a pivotal role in guiding the direction of development efforts, ensuring that resources are allocated efficiently and aligned with the company’s goals. To this end, if I were making business decisions about development, the questions I would keep at the front of my mind are something like:

  1. “Do we really need this feature?”
  2. “Can we validate this quickly?”
  3. “What do users think of this?”
  4. “Is this something we can soft launch?”

The connection between these strategic considerations and the earlier technical discussion is clear: writing code is a significant investment for any business, fraught with risks and costs. By concentrating development efforts on solving real business problems—rather than getting bogged down in purely technical challenges and nice-to-haves—companies can maximize the impact of their programming resources. This strategic alignment not only optimizes resource utilization but also enhances the potential for product success in the market.

Conclusion

The development of the Picket backend system has equipped administrators with an array of powerful tools designed to streamline the inspection process for realtors, inspectors, and home-buyers, enhancing ease, quality, and transparency. Beyond facilitating inspections, the system harbors the potential to deliver additional value to home-buyers by offering referrals to a wide range of home-improvement services, covering everything from basement upgrades to roof repairs. However, the effectiveness of these tools and the realization of this value-added potential hinge critically on user acquisition and engagement. Continuous feedback from users is indispensable for identifying and addressing the most pressing problems they face.

This underscores a fundamental lesson learned from the Picket development experience: the importance of focusing on the business layer and ensuring that it reaches users as swiftly as possible. Rapid deployment and user feedback loops are essential, not just for validating the utility of developed features, but also for guiding the development process towards solutions that genuinely resonate with user needs. Without this direct line of feedback, even the most well-intentioned and technically sophisticated solutions risk missing the mark, solving problems that may not align with the actual needs or pain points of the end-users.
