Foundations of a Backend System for a Modern Business

The Picket platform aims to simplify the home inspection process: to make it as easy as filling out a form, or better yet, having your realtor do it for you. It lets homebuyers easily connect with local, qualified home inspectors on their terms, removing the guesswork and inconvenience that often come with finding and scheduling inspections. Both homebuyers and inspectors have the flexibility to choose the timing and location of their inspection, and the Picket team ensures all on-platform home inspectors are licensed and qualified. The purpose of this document is to provide insight into the technical infrastructure and decision-making behind Picket’s backend system, focusing on the challenges encountered and the solutions devised during its development. I wrote the first line of Picket on February 25th, 2020, and worked on it and all its parts for more than three years. The goal is to share the experiences and lessons learned throughout this development process, highlighting the technologies used and how I would change my approach if I were to do it again.

Problem

The traditional home inspection industry is fraught with complexities and coordination challenges, involving multiple stakeholders such as realtors, homebuyers, home sellers, and inspectors. For most individuals, buying a home is a rare occurrence, perhaps only happening a few times in their lifetime. This infrequency means that the majority of homebuyers lack expertise in navigating the home inspection process, a problem further compounded by the critical timing often required in these transactions. The coordination between the various parties can be cumbersome, leading to delays and increased stress for homebuyers who are under pressure to move quickly. Given these challenges, there is a clear need for innovation in the way homebuyers connect with qualified inspectors. A streamlined, efficient system not only has the potential to reduce the stress associated with timing and unfamiliarity but also ensures that all parties are adequately informed and prepared, leading to a more transparent and effective inspection process. The design of this interaction, and the idea to solve this problem with technology, was conceived by the Picket management team, which included a realtor, an inspector, and an entrepreneur. I joined as the technologist, and what follows are some of the decisions I made to realize their vision.

Services

In developing the backend for Picket, the selection of external services was strategically aligned with the principles of reliability, ease of setup, and robust support. Given the necessity to also develop a frontend website component, choosing a technology stack centered around the JavaScript ecosystem was nearly obvious. This prerequisite meant that only services with Node.js support, and ideally TypeScript definitions, were considered. Consequently, platforms such as Heroku and AWS were selected for their reputable standing in hosting and storage solutions, complemented by well-documented and maintained Node.js and NPM integrations that streamline development workflows. Stripe was chosen for payment processing, thanks to its solid reputation, extensive documentation, and secure, efficient transaction handling. Twilio was incorporated for SMS communication services, leveraging its comprehensive API library and strong community support to enable seamless text messaging integration. Other services like Bitly and SendGrid were similarly chosen for their straightforwardness and efficiency in URL shortening and email delivery, respectively. These services not only offer the advantage of quick setup and user-friendliness but also stand out for their dependable performance, supported by thorough documentation and active communities on platforms such as Stack Overflow. This carefully selected ecosystem of proven services ensures that developers, in this case me, have access to an extensive pool of knowledge and troubleshooting resources, thereby enhancing the development process’s overall smoothness and efficiency.

Tools

The decision to utilize Node.js, NPM, and TypeScript was primarily influenced by the need to concurrently develop a frontend and ensure robust integrations with the aforementioned services, significantly narrowing an otherwise vast field of backend tooling options. The tooling for GraphQL on both the backend and frontend was, and continues to be, incredibly productive. Despite the initial overhead of setting up a GraphQL server and the need for diligent monitoring of query efficiency, the benefits of using TypeScript annotations to define GraphQL schemas—which, in turn, enabled the frontend to generate queries—were sufficient to justify its selection. Apollo Server Express emerged as the standout choice due to its exceptional community support and comprehensive documentation. The adoption of Express itself was almost reflexive, given its ubiquity in the JavaScript ecosystem.

For potential support of more complex authentication scenarios and to streamline the identity system, Redis was chosen to enable a stateful model. In retrospect, this was perhaps unnecessary, as no chat features were ultimately implemented, and a stateless model would have been adequate. Nonetheless, it did not introduce any negative consequences. At that time, TypeORM was a relatively new package for database interaction, not yet having reached version 1.0. Despite the risks associated with its nascent state, it was the best available option and has since matured into a robust tool with a wide range of advanced features. The ability to use SQLite locally for development was particularly appealing for its simplicity in database management, allowing for direct file access to view and modify data, a convenience preferred over more complex setups like connecting to PostgreSQL through pgAdmin. Nonetheless, PostgreSQL was the natural choice for production due to its scalability, seamless integration with TypeORM, and support on Heroku.
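The SQLite-locally, PostgreSQL-in-production split can be sketched as a small helper that picks connection options by environment. The option shapes below loosely mirror TypeORM’s, but the function, its defaults, and the file name are illustrative assumptions, not Picket’s actual configuration:

```typescript
// Illustrative sketch: choose database options by environment.
// The option shapes loosely mirror TypeORM's; the helper itself is hypothetical.
type DbOptions =
  | { type: "sqlite"; database: string }
  | { type: "postgres"; url: string };

function dbOptionsFor(env: string, databaseUrl?: string): DbOptions {
  if (env === "production") {
    // Heroku Postgres exposes its connection string as DATABASE_URL.
    if (!databaseUrl) throw new Error("DATABASE_URL is required in production");
    return { type: "postgres", url: databaseUrl };
  }
  // Locally, SQLite keeps the whole database in one file you can open directly.
  return { type: "sqlite", database: "dev.sqlite" };
}
```

The appeal of this arrangement is exactly the one described above: the development database is a single inspectable file, while production gets a managed, scalable PostgreSQL instance.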

The inception of the Picket project was guided by an understanding that the full scope of required functionalities could not be entirely predicted from the start. To address this, the development strategy emphasized adaptability, allowing for quick iterations and modifications as new needs emerged. This approach led to the adoption of code generation, which, when paired with GraphQL, introduced some inefficiencies and challenges. A system was designed to automate the generation of queries for resources and their interconnections, allowing developers to easily access nested relationships up to a specified “depth.” Despite its advantages in facilitating dynamic data retrieval, this method raised concerns over query efficiency and the risk of circular references; but given its productivity, and that we avoided any “premature” optimization, it was ultimately a success, as it facilitated the quick development and changes described below.
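The core idea of that generator, depth-limited expansion of relations, can be sketched as follows. The resource shapes, names, and query format here are hypothetical stand-ins, not Picket’s actual code; the point is only how a depth cap keeps circular relations (a job’s inspector’s jobs, and so on) from recursing forever:

```typescript
// Illustrative sketch of depth-limited query generation (not Picket's actual code).
// Each resource lists its scalar fields and its relations to other resources.
interface Resource {
  fields: string[];
  relations: Record<string, string>; // relation name -> target resource name
}

const resources: Record<string, Resource> = {
  Job: { fields: ["id", "address"], relations: { inspector: "Inspector" } },
  Inspector: { fields: ["id", "name"], relations: { jobs: "Job" } },
};

// Recursively expand a resource's fields, stopping at maxDepth so that
// circular relations (Job -> Inspector -> Job -> ...) terminate.
function selectionSet(name: string, depth: number, maxDepth: number): string {
  const res = resources[name];
  const parts = [...res.fields];
  if (depth < maxDepth) {
    for (const [rel, target] of Object.entries(res.relations)) {
      parts.push(`${rel} { ${selectionSet(target, depth + 1, maxDepth)} }`);
    }
  }
  return parts.join(" ");
}

function buildQuery(name: string, maxDepth: number): string {
  return `query { ${name.toLowerCase()}s { ${selectionSet(name, 0, maxDepth)} } }`;
}
```

For example, `buildQuery("Job", 1)` yields `query { jobs { id address inspector { id name } } }`. The efficiency concern mentioned above is visible here too: raising the depth multiplies the size of the selection set, which is why query cost needed monitoring.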

Features

As Picket matured, it was made to serve four distinct user groups, each with tailored interfaces and functionalities:

  • Administrators had access to a comprehensive dashboard, providing a holistic view of the platform’s data.
  • Customers engaged with the platform through a streamlined, “logged out” experience, primarily via email and SMS communications.
  • Inspectors were granted access to a dashboard focused on managing their job assignments.
  • Realtors had the capability to refer jobs and inspectors, favoring those within their professional network.

The diverse needs of these user groups underscored the necessity for a robust communication framework to ensure transparency and facilitate seamless audits of interactions. To this end, a sophisticated communication log system was established for administrators to review, capturing all user interactions and supported by a versatile templating system. This system enabled real-time modifications to email and SMS templates, allowing for the rapid integration of business insights into communication strategies.
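A minimal version of such a template renderer might look like the following. The placeholder syntax, function name, and the choice to leave unknown placeholders visible are illustrative assumptions, not a description of Picket’s actual implementation:

```typescript
// Illustrative sketch of a template renderer: administrators edit templates
// stored in the database, and the backend fills in per-message values.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    // Leave unknown placeholders intact so a bad template is visible in the
    // communication log instead of silently dropping text.
    key in vars ? vars[key] : match
  );
}
```

For example, `renderTemplate("Hi {{name}}, your inspection is at {{time}}.", { name: "Ada", time: "9am" })` produces `"Hi Ada, your inspection is at 9am."`, and because the template itself lives in the database, an administrator can reword it without a deploy.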

Operational agility was further augmented by configurable business parameters, including inspection lead times, cancellation policies, and referral percentages, among others. These adjustable settings empowered Picket to swiftly adapt to changing business landscapes and user demands. Additionally, the integration of the US government’s ZIP code database provided a mechanism for selectively enabling or disabling service locations. Administrators could activate services for entire states, specific counties, or individual ZIP codes.
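The state/county/ZIP gating can be sketched as a layered membership check. The data shapes and the county keying scheme below are hypothetical, chosen only to illustrate the idea of enabling service at three levels of granularity:

```typescript
// Illustrative sketch of location gating: a ZIP code is serviceable if its
// state, its county, or the ZIP itself has been enabled by an administrator.
interface ZipRecord { zip: string; county: string; state: string }

interface EnabledAreas {
  states: Set<string>;
  counties: Set<string>; // keyed as "STATE/County" to disambiguate shared names
  zips: Set<string>;
}

function isServiceable(record: ZipRecord, enabled: EnabledAreas): boolean {
  return (
    enabled.states.has(record.state) ||
    enabled.counties.has(`${record.state}/${record.county}`) ||
    enabled.zips.has(record.zip)
  );
}
```

Enabling a whole state then amounts to one row-level change, while a pilot in a single ZIP code touches only the `zips` set.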

To refine targeting and incentivize customer engagement, a promotional code system was implemented. This mechanism allowed for adjustments to service estimates in a way that preserved inspector compensation, ensuring that promotional activities did not compromise service provider interest.
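The arithmetic behind such a promo can be sketched as follows, assuming (as described above) that the discount comes entirely out of the platform’s margin rather than the inspector’s payout. The field names, the percentage model, and the guard are illustrative assumptions:

```typescript
// Illustrative sketch: apply a promo discount to the customer's price while
// keeping the inspector's payout fixed; the platform margin absorbs the cut.
interface Quote { customerPrice: number; inspectorPayout: number }

function applyPromo(base: Quote, percentOff: number): Quote {
  const discount = (base.customerPrice * percentOff) / 100;
  const margin = base.customerPrice - base.inspectorPayout;
  if (discount > margin) {
    // Refuse promos that would eat into the inspector's compensation.
    throw new Error("Promo would cut into the inspector's payout");
  }
  return {
    customerPrice: base.customerPrice - discount,
    inspectorPayout: base.inspectorPayout, // unchanged by design
  };
}
```

So a 10% code on a $400 estimate with a $300 payout yields a $360 customer price and the same $300 payout, while a 30% code would be rejected outright.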

The overarching aim was to equip administrators with the tools to implement business modifications based on real-time feedback—whether it involved activating services in new ZIP codes, updating email content, or confirming the timely delivery of SMS messages to the intended recipients. This comprehensive and adaptable system was central to Picket’s mission, enabling it to respond quickly to market demands and user feedback.

Challenges

Addressing the challenges of rapidly evolving requirements on the backend, we encountered the inherent rigidity of database schemas and endpoints when faced with changing needs. Despite these hurdles, TypeORM’s migration generation tool proved invaluable, enabling fluid adjustments as requirements evolved. This flexibility was complemented by the frontend’s ability to generate code directly from the GraphQL schema, simplifying the process of adding new fields, updating types, and removing obsolete data.

Reflecting on the decision-making process, my principal oversight was opting for Redis for authentication when a stateless approach would have been entirely adequate. At the time, still honing my skills in Node.js backend development, I explored various options and ultimately made an ill-suited choice. I anticipated benefits from utilizing an in-memory database that, in hindsight, were premature and introduced unnecessary complexity. This experience highlighted a missed opportunity to leverage Node.js backend boilerplates, which, had I been more familiar with or had access to better options at the time, could have provided a solid foundation for authentication and session management, simplifying the development process.
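For contrast, the stateless alternative can be sketched with nothing more than Node’s built-in `crypto` module: a signed token carries its own session state, so validating it requires no Redis lookup at all. This is a generic HMAC-token sketch under a hardcoded secret, not Picket’s code, and a production system would reach for an established JWT library instead:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative sketch of stateless sessions: the token itself carries the
// session payload plus a signature, so no server-side store is consulted.
const SECRET = "change-me"; // in practice, loaded from an environment variable

function signToken(payload: object): string {
  const body = Buffer.from(JSON.stringify(payload)).toString("base64url");
  const sig = createHmac("sha256", SECRET).update(body).digest("base64url");
  return `${body}.${sig}`;
}

function verifyToken(token: string): object | null {
  const [body, sig] = token.split(".");
  if (!body || !sig) return null;
  const expected = createHmac("sha256", SECRET).update(body).digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Constant-time comparison; any tampering invalidates the signature.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}
```

The trade-off is the usual one: revocation and expiry need extra machinery in a stateless model, but for an application without chat or other session-heavy features, that machinery is far lighter than running Redis.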

Time management presented another significant challenge, particularly in programming backend logic to respond dynamically to specific time-based events, such as notifications preceding inspections or following job offers to inspectors. To avoid the inefficiency of continuously running cron jobs, I adopted strategies that averaged time ranges from their originating events. This approach, while practical, sometimes resulted in closely spaced notifications, a compromise between efficiency and responsiveness.
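The derive-then-schedule approach can be sketched as follows: notification times are computed from the originating event, and each one is handed to a one-shot timer instead of a polling cron job. The reminder offsets are hypothetical, and a production version would also need the timers to survive process restarts:

```typescript
// Illustrative sketch: derive notification times from an originating event
// instead of polling with a cron job. The offsets here are hypothetical.
interface Inspection { scheduledAt: Date }

function notificationTimes(inspection: Inspection): Date[] {
  const at = inspection.scheduledAt.getTime();
  const HOUR = 60 * 60 * 1000;
  return [
    new Date(at - 24 * HOUR), // day-before reminder
    new Date(at - 1 * HOUR),  // hour-before reminder
  ];
}

// Each computed time becomes a one-shot timer rather than a recurring poll.
function scheduleAt(when: Date, task: () => void): ReturnType<typeof setTimeout> {
  return setTimeout(task, Math.max(0, when.getTime() - Date.now()));
}
```

When the offsets are derived rather than individually tuned, two reminders can land close together, which is the compromise between efficiency and responsiveness described above.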

The evolving nature of business requirements underscored the necessity for a system that could adapt without constant developer intervention. The tools I developed were effective at empowering administrators to tailor the platform to their needs. It should be noted, however, that the more things can change, the higher the potential for inconsistent or unexpected behavior, since interconnected parts become difficult to test. All things considered, and to transition into the lessons I learned from Picket, I recommend the following: if a value can be a configuration variable, it can probably just as easily be a runtime variable in the database, one that a qualified user other than a developer can modify, so long as they understand the consequences.
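That recommendation can be sketched as a runtime settings lookup with a developer-supplied default. The key names are hypothetical, and a `Map` stands in here for a settings table in the database:

```typescript
// Illustrative sketch: business parameters read from a settings store at
// runtime, with developer-supplied defaults as a safety net.
function getNumberSetting(
  store: Map<string, string>, // stands in for a settings table in the database
  key: string,
  fallback: number
): number {
  const raw = store.get(key);
  const parsed = raw === undefined ? NaN : Number(raw);
  // A missing or malformed row falls back rather than breaking the flow.
  return Number.isFinite(parsed) ? parsed : fallback;
}
```

A qualified administrator edits the stored value; the developer only ships the default and the guard against bad input.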

Lessons

Developing Picket immersed me into the nuances of Node.js backend systems, offering lessons ranging from database design and the limitations of Object-Relational Mapping Systems—like the inevitable reliance on raw SQL—to the significance of keeping up-to-date with the Long-Term Support (LTS) versions of Node.js, and the intricacies of delivering dynamically changing information in a RESTful manner. The experience yielded valuable methodological insights that have significantly shaped my approach to subsequent projects. As the landscape of backend development tools continues to evolve, becoming increasingly accessible, I’ve learned the importance of scrutinizing developments that veer away from direct business logic. Aspects such as “translation layers,” “module organization,” or any endeavor that verges on creating a “framework” warrant a cautious approach by default.

In today’s context, where development tools have reached remarkable levels of sophistication, any effort spent on non-business logic elements on the backend should prompt a consideration of existing high-quality, open-source projects that may have already addressed similar challenges. Reflecting on this, should I embark on redeveloping Picket’s backend, my choice would lean towards utilizing Nest.js and a corresponding Nest.js boilerplate. Nest.js, functioning as a comprehensive “meta-framework” for Node.js, employs an “Angular-like” methodology for structuring modules and dependencies. This insight emerged from my personal experiences with constructing a custom “micro-framework,” where the minor, yet persistent, inconveniences and the complexities of managing growing inconsistencies became evident. The approach of categorizing features into folders based on their concerns led to a web of interdependencies, at times cyclical, which, when combined with type-checking and transpilation, introduced complex debugging challenges. Nest.js, designed with foresight for such issues, provides robust mechanisms for their resolution, along with default and tested integrations for GraphQL, RESTful APIs, Swagger/OpenAPI, TypeORM, and more.

However, this approach comes with its own set of challenges: the risk of over-reliance on external code without a thorough understanding of its inner workings. Striking a balance between leveraging existing solutions and developing custom ones is no simple task. It demands an ongoing process of learning and reflection to determine when to utilize external resources and when unique issues necessitate tailored solutions. This equilibrium is pivotal in backend development, where differentiating between the use of available tools and the need for innovation is crucial to ensuring both the quality and efficiency of the work. To distill the advice succinctly: stick to business logic.

At the risk of assertions on non-technical topics I am not as comfortable with, I’d like to speculate briefly about how this maxim should affect decision-making for business people. Management plays a pivotal role in guiding the direction of development efforts, ensuring that resources are allocated efficiently and aligned with the company’s goals. To this end, if I were making business decisions about development, the questions I would keep at the front of my mind are something like:

  1. “Do we really need this feature?”
  2. “Can we validate this quickly?”
  3. “What do users think of this?”
  4. “Is this something we can soft launch?”

The connection between these strategic considerations and the earlier technical discussion is clear: writing code is a significant investment for any business, fraught with risks and costs. By concentrating development efforts on solving real business problems—rather than getting bogged down in purely technical challenges and nice-to-haves—companies can maximize the impact of their programming resources. This strategic alignment not only optimizes resource utilization but also enhances the potential for product success in the market.

Conclusion

The development of the Picket backend system has equipped administrators with an array of powerful tools designed to streamline the inspection process for realtors, inspectors, and homebuyers, enhancing ease, quality, and transparency. Beyond facilitating inspections, the system harbors the potential to deliver additional value to homebuyers by offering referrals to a wide range of home-improvement services, covering everything from basement upgrades to roof repairs. However, the effectiveness of these tools and the realization of this value-added potential hinge critically on user acquisition and engagement. Continuous feedback from users is indispensable for identifying and addressing the most pressing problems they face.

This underscores a fundamental lesson learned from the Picket development experience: the importance of focusing on the business layer and ensuring that it reaches users as swiftly as possible. Rapid deployment and user feedback loops are essential, not just for validating the utility of developed features, but also for guiding the development process towards solutions that genuinely resonate with user needs. Without this direct line of feedback, even the most well-intentioned and technically sophisticated solutions risk missing the mark, solving problems that may not align with the actual needs or pain points of the end-users.

The post Foundations of a Backend System for a Modern Business appeared first on Paul Jones.