Table of Contents
1. Preface
    1.1 Introduction
    1.2 Goals
    1.3 Features
    1.4 Tenets
    1.5 Design principles
    1.6 Framework architecture
    1.7 Scenarios and use cases
2. Introduction
    2.1 Developer's work
3. Setup
    3.1 Getting started
        3.1.1 Glossary
        3.1.2 Pre-requisites
        3.1.3 Steps to get started
        3.1.4 Time to start the development
    3.2 Project structure
        3.2.1 Scaffolding & Project structure
    3.3 Configuration
        3.3.1 Introduction
        3.3.2 Environment variables
        3.3.3 Static variables
    3.4 Tests
    3.5 Auto watch and build
4. CLI
    4.1 Functionality
    4.2 Installation
    4.3 Options
    4.4 Commands: Outside the dev container
    4.5 Commands: Inside the dev container
5. Swagger Specs
    5.1 CLI command to generate documentation
    5.2 Custom Server URL
6. Events
    6.1 Event types
    6.2 Event schema & examples for supported sources
        6.2.1 JSON schema validation
        6.2.2 HTTP event
        6.2.3 Kafka event
7. Workflows
    7.1 The structure of workflows
    7.2 The tasks within workflows
    7.3 Location and fully qualified name (id) of workflows and functions
    7.4 Referencing a workflow within an event or another workflow
    7.5 Use of Coffee/JS for scripting
    7.6 Inbuilt functions
        7.6.1 com.gs.http
        7.6.2 com.gs.kafka
        7.6.3 com.gs.datastore
        7.6.4 com.gs.elasticgraph
        7.6.5 com.gs.transform
        7.6.6 com.gs.series
        7.6.7 com.gs.parallel
        7.6.8 com.gs.switch
        7.6.9 com.gs.each_sequential
        7.6.10 com.gs.each_parallel
        7.6.11 com.gs.return
        7.6.12 com.gs.log
        7.6.13 com.gs.dynamic_fn
        7.6.14 com.gs.aws
        7.6.15 com.gs.redis
        7.6.16 com.gs.if, com.gs.elif, com.gs.else
    7.7 Developer written functions
    7.8 Headers defined at workflow level
    7.9 File Upload feature
        7.9.1 Workflow spec to upload files with the same file key
        7.9.2 Workflow spec to upload multiple files with different file keys
        7.9.3 Workflow spec to upload a file directly from a URL
8. Datasources
    8.1 Introduction
        8.1.1 Datasource types
    8.2 Datasources
        8.2.1 Before and after hooks to datasource calls
    8.3 API datasource
        8.3.1 API datasource schema defined externally
        8.3.2 API datasource schema defined within the yaml file
        8.3.3 Headers defined at datasource level
        8.3.4 Headers defined at task level
        8.3.5 Example usage
    8.4 Datastore as datasource
        8.4.1 Schema specification
        8.4.2 CLI Commands
        8.4.3 Prisma Datastore Setup
        8.4.4 Auto generating CRUD APIs from data store models
        8.4.5 Sample datastore CRUD task
    8.5 Kafka as datasource
        8.5.1 Example spec
    8.6 Elasticgraph as datasource
        8.6.1 Folder Structure
        8.6.2 Datasource DSL
        8.6.3 Configuration files for elasticgraph
        8.6.4 Elasticgraph Setup
        8.6.5 Auto generating CRUD APIs for elasticgraph
    8.7 Extensible datasources
        8.7.1 Datasource definition
        8.7.2 Example spec for the event
        8.7.3 Example spec for the workflow
    8.8 AWS as datasource
        8.8.1 Example spec
        8.8.2 com.gs.aws workflow
    8.9 Redis as datasource
        8.9.1 Example spec
    8.10 RabbitMQ as datasource
        8.10.1 Example spec
9. Caching
    9.1 Specifications
        9.1.1 Datasource spec for redis
        9.1.2 Configuration
        9.1.3 Workflow spec
10. Mappings
    10.1 Project structure
    10.2 Sample mappings
    10.3 Use mappings constants in other mapping files
11. Plugins
    11.1 Project structure
    11.2 Sample plugins
    11.3 Sample workflow using plugins
12. Authentication & Authorization 12.1 Authentication 12.1.1 JWT Configuration 12.1.2 Event spec 12.1.3 Generate JWT 12.1.4 Datasource authentication
12.2 Authorization 12.2.1 Workflow DSL 12.2.2 Sample DB query call authorization
13. Telemetry
    13.1 Introduction
        13.1.1 Architecture
    13.2 Goals
    13.3 Configuration
        13.3.1 OTEL exporter endpoint
        13.3.2 OTEL service name
        13.3.3 Logging
            13.3.3.1 Log level
            13.3.3.2 Log fields masking
            13.3.3.3 Log format
            13.3.3.4 Add custom identifiers in logs
    13.4 Custom metrics, traces and logs (BPM)
        13.4.1 DSL spec for custom metrics
        13.4.2 DSL spec for custom trace
        13.4.3 DSL spec for custom logs
    13.5 Observability Stack
    13.6 Recommended model for telemetry signals
14. Custom Middleware
    14.1 How to add custom middleware in Godspeed
16. FAQ
    16.1 What is the learning curve of the microservice framework?
    16.2 What are the development process and quality metrics?
    16.3 How can we adopt new versions of used technology easily and fast? For example, a new Postgres release.
    16.4 How easy is it to add new technology in place of an existing one, or add something absolutely new and unique (not existing in the framework)?
    16.5 Which databases are currently supported? What is the roadmap for future support?
    16.6 Does the API handle DB transactions?
    16.7 How can apps be decoupled or loosely coupled with DBs?
    16.8 When using a Godspeed service alongside Spring Boot, what is the performance impact of the extra hop versus connecting to the DB directly from Spring Boot?
    16.9 What is the strategic advantage of making DB queries through Godspeed?
    16.10 How to achieve multi-tenancy in DBs for a single application?
    16.11 How can we start adopting the Godspeed framework?
    16.12 How to move out of the Godspeed framework? Can we have a two-door exit, i.e. can we move out of both the technology and the data?
    16.13 How will we prevent the unified CRUD API from limiting or choking us?
    16.14 What kind of API standards does the framework support?
    16.15 Why a REST-first approach? Why not a GraphQL-first approach?
    16.16 How are we doing testing, given there is quite a bit of custom DSL in the framework? How do we ensure correctness?
    16.17 How will upgrades and migrations to the framework be done?
    16.18 How will CRUD APIs support both the paid and non-paid features of databases such as MongoDB? For example, MongoDB free vs paid versions support different features.
    16.19 How to ship new models easily?