
Micro/NanoService YAML


This functionality was originally offered by the parent company as a separate product Fathomable. It is undergoing consolidation into Cyvive's core and is restricted to beta customers at this time. Please contact your account manager via live-chat should you wish to participate.


  • Scope: MicroService Specific
  • File: cyvive.yaml
  • Location: Git Repository / CI/CD Pushed


… core component of the Universal MicroServices Language. This file describes the specific MicroService Governance policy to be extracted and implemented on demand in a Cloud Agnostic Cluster.

While Cyvive doesn't mandate a specific adoption approach, a truly minimal configuration file is achieved by following industry best practices for structure and organisation.

Cyvive's configuration language is architecturally and contextually derived from partnerships with three universities and over 35 industry publications. It provides governance via a policy meta-model that interfaces agnostically with the most complex orchestration technology while remaining simple enough to be discussed in plain conversation.

This file is the powerhouse behind agnostic infrastructure, and provides the simplest abstraction of deployment available today.


Cyvive aligns with Cloud Native and MicroServices approaches. However, this Universal MicroServices Language is flexible enough to be used with OSIMMv2 / TOGAF, so the following naming conventions apply:

MicroServices:

  • Suite
  • Micro
  • Nano

OSIMMv2 / TOGAF:

  • Group -> Suite
  • Service -> Micro
  • Component -> Nano

CloudFunctions:

  • Functions -> Nano

"If your PaaS can efficiently start instances in 20ms that run for half a second, then call it serverless." (Adrian Cockcroft, AWS VP, 2016)

In the context of this quote, any well designed container with proper governance and management can be classified as a function. For this reason, Cyvive uses the terminology Nano instead of function to more appropriately cover the functions and nanoservices classifications.

Absolute Minimum File Structure


The '⇛' character is used throughout this documentation where items are mandatory with respect to the parent YAML key. If not specified, the item is optional.

⇛ exampleMicro:
⇛   version: 'v1.8.x'

… this minimum structure is all that is necessary for Cyvive to commence governance and interface with the orchestration technology at a minimum level.

If Cyvive were then used to govern this application / service in a deploymentTarget called 'preproduction', using the template 'perf' against Kubernetes as an orchestrator, the following would happen:

  • a Deployment would be created using the DockerHub hosted image: exampleSuite/exampleMicro:v1.8.x
  • a Service would be created mapping port 80 to the Deployment
  • a DNS entry would be created in the cluster as exampleMicro.exampleSuite.preproduction.svc.cluster.local
  • any other applications governed by Cyvive and allowed to be deployed against this template would also be deployed, following an automated dependency order.


SemVer is rapidly becoming the version management approach of choice for development, as it balances the needs of developers and continuous container deployment well while maintaining business requirements for release management and change requests.

Micro/NanoService Inheritance Structure

⇛ exampleMicro:
    nano: 'identifier'
⇛   version: 'vTag'

… the typical structure used in a governance model provides nano, micro and suite as units of categorisation. Cyvive inherits this concept, where suite would typically be a correctly decomposed higher-level Business Unit containing a collection of MicroServices.

The splitting point between micro and nano typically has a line of demarcation drawn between the need for an internal data model and direct algorithmic processing.

The exact definition and demarcation of suites is left to the client; however, by default, when requesting a container from the container repository the format will be:

  • exampleSuite/exampleMicro:vTag, or
  • exampleSuite/exampleMicro-exampleNano:vTag if the nano key has been provided
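For illustration, a Micro/NanoService declaring a nano key might look like the following sketch (exampleNano is a placeholder identifier):

```yaml
# Illustrative only: 'exampleNano' is a placeholder nano identifier
exampleMicro:
  nano: 'exampleNano'
  version: 'v1.8.x'

# Container requested from the repository:
#   exampleSuite/exampleMicro-exampleNano:v1.8.x
```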

As such, it's possible to use Cyvive with any level of complexity in application relationships, while keeping isolation and deployment in a layered security approach.

Suite Level Configuration / Technology Descriptor

  exampleMicro: {}
  suite: {}

… is available for all suites under governance. Where applicable, identical specifications at the suite level override the template level. It's not necessary for suite information to be specified as a prerequisite to adding a Micro/NanoService.

It is possible for a Micro/NanoService to inherit configuration information from the suite level, thus allowing enhanced security over a traditional deployment model, where keys can be present in the application environment and unknown to developers with MicroService / NanoService source code access.

Additional information on the suite Technology Descriptors is available.

MicroService Technology Descriptor

    availability: {}
    circuitBreaker: {}
    commandLineInterface: {}
    component: string
    daemon: boolean
    endpoint: {}
    environment: {}
    label: {}
    layer: string
    repository: {}
    resource: {}
    security: {}
    stateful: {}
    version: {}

… illustrates a high level overview of the logical configuration sections. Detailed information is located under each subheading below. Where values for keys are shown they are the default values.


    availability:
      gracePeriod:
        boot: 1
        stability: 0
        termination: 5
      minimum: 1
      maximum: 2
      probes:
        health:
          interval: 10
          path: '/'
          port: 80
        ready:
          interval: 5
          path: '/'
          port: 80
        timeout: 1
      scalingEvent: {}
  • gracePeriod: minimum time in seconds to wait for events to occur
    • boot: Micro/NanoService & operating system startup boot time; this should be the minimum time before a health check endpoint is available to process a request.
    • stability: amount of time to wait for the health check endpoint to ensure consistent returns after boot time has completed. This is typically used in legacy applications that need to stabilize their upstream and downstream communications when started, or have large amounts of data to sync.
    • termination: the maximum amount of time to wait before hard-terminating the container operating system. It does not guarantee the Micro/NanoService will take this long to terminate; it's just the maximum amount of time to wait for the signal from the container operating system confirming it's okay to terminate. (Note: the termination signal is sent immediately to all container processes; this is the period before the kill signal is sent.)
  • minimum: guaranteed minimum number of replicas to always be deployed in the deploymentTarget
  • maximum: when scaling ensure that this number is not exceeded
  • probes: notify the underlying orchestrator of Micro/NanoService status, where /health is actual health and /ready is the ability to receive traffic
    • interval: time in seconds for checking endpoint
    • path: endpoint path relative to container
    • port: internal port the probe endpoint is listening on
    • timeout: this option is not nested under each probe; as failures are being monitored for, it's expected that timeout values should apply equally to all check-related endpoints.
  • scalingEvent: is a passthrough object of trigger events to integrate with orchestration support in triggering scaling up and down of the replicas.

Note: with respect to deployment timeouts, Cyvive's standard approach is to mark stalled deployments as failed if the Micro/NanoService fails to enter the ready state within one of the following timelines, in order of priority, rounded to the nearest second:

  1. probe.ready specified: (gracePeriod.boot + gracePeriod.stability + (probe.ready.interval * 2)) * 3.3
  2. probe.health specified: (gracePeriod.boot + gracePeriod.stability + (probe.health.interval * 2)) * 3.3
  3. gracePeriod.stability: (gracePeriod.boot + gracePeriod.stability + 10) * 3.3
  4. gracePeriod.boot: (gracePeriod.boot + 10) * 3.3
  5. default settings: 33 seconds

As seen above, a 10% buffer is applied to these times to ensure container scheduling / restarting via the orchestrator doesn't introduce false-positives.
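As a worked example under assumed values (boot 5s, stability 3s, probe.ready.interval 5s, with the keys nesting as described above), the first timeline would apply:

```yaml
# Assumed values for illustration
availability:
  gracePeriod:
    boot: 5        # seconds
    stability: 3   # seconds
  probes:
    ready:
      interval: 5  # seconds

# Stall timeout = (5 + 3 + (5 * 2)) * 3.3 = 59.4 -> 59 seconds (rounded)
```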



configuration structure still stabilizing


    commandLineInterface:
      argument: []
      command: ''

… is an override path for container start commands. While the start command and respective arguments are usually embedded in the container image itself, efficient governance exposes all configuration, execution and dependency information. As such, the organisation can choose to embed startup information in the container metadata or expose it via the governance layer.

  • argument: standard cli arguments for execution. e.g. ['--list', '--debug']
  • command: root command to execute when starting the container, e.g. '/usr/local/bin/command'. Should this not be specified, the default command the container was built with will be executed.
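Putting the two keys together, a populated override using the example values above might look like this sketch:

```yaml
commandLineInterface:
  command: '/usr/local/bin/command'  # root command to execute
  argument: ['--list', '--debug']    # standard CLI arguments
```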


    nano: ''

… provides a namespace separation for NanoServices. Typically used when the MicroService needs to be logically broken down further than is possible within the suite specification, or when only algorithmic processing capabilities are necessary.


    daemon: false

… upgrades the Micro/NanoService to run as a Daemon in the deploymentTarget. This ensures that every physical node belonging to this suite will have this Daemon available on a low-latency local network hop.


    endpoint:
      domain: {}
      port: 80
      provide: ['/']
      scheme: 'https'
      require: []

… are created for every Micro/NanoService by default and register DNS via the following schema: nanoName-microName.suiteName.deploymentTarget.domain

  • port: open port for inbound interaction.
  • provide: important for correct dependency management; the provide endpoints are the registration points for Micro/NanoService searching and generation in the deployment graph.
  • scheme: extensible against the scheme definitions in the RFC standards; the key types are http and https, where specifying https will cause auto-creation of SSL certificates at the cluster ingress point.

Exploring the complex root keys:

  '': ['DEVLIKE']

The domain object is structured as follows:

  • key: the domain name to expose against. This should be the Fully Qualified Domain Name (FQDN), as the autogenerated DNS structure applies inside the cluster only.
  • value: array of operatingEnvironments valid for exposing against. Exposure follows the schema mentioned previously; however, it can be overridden as necessary.

  - 'redux.exampleSuite:443/api/v1/ending': ['incoming']
  - 'exampleMicro.exampleSuite:80/v1/': ['incoming', 'outgoing']
  - '': ['outgoing']

The require object is quite important; while optional, it is strongly recommended to always supply it. It identifies all dependencies this Micro/NanoService has, and helps contribute to the deployment order when creating a deploymentTarget. In the event a require is not registered with Cyvive, it will be considered external to the cluster and assumed to already exist.

A useful note is that different versions of the same Micro/NanoService can be consumed by each other. This is achieved via the version key, where each governance technology descriptor registers against the Micro/NanoService version.

The require object is structured as follows:

  • key: Uniform Resource Identifier (URI), RFC 3986 compliant. The scheme is unnecessary, as any routing restrictions are scoped as above.
  • value: traffic direction for firewall / security registration.
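Combining the keys above, a hypothetical endpoint section might read as follows (the FQDN shop.example.com and the operatingEnvironment values are placeholders):

```yaml
# Hypothetical endpoint configuration; domain/require values are placeholders
endpoint:
  port: 443
  scheme: 'https'                     # triggers SSL auto-creation at ingress
  provide: ['/api/v1/']               # registration points in the deployment graph
  domain:
    'shop.example.com': ['PRODLIKE']  # FQDN exposed in PRODLIKE environments
  require:
    - 'redux.exampleSuite:443/api/v1/ending': ['incoming']
```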


        config: {}
        secret: {}
      variable: {}

… all items are directly exposed to the Micro/NanoService.

Exploring the complex root keys:

      mountPath: '/alpha'
      data:
        - name: configDetail
          value: 'string of information'
      inheritSuite: false

Each item in config is a representation of a ConfigMap with individual items specified in the array object under data. Each item represents an individual file. mountPath is the directory location in the container that the ConfigMap should be mounted to.

If inheritSuite is provided, the configuration will be loaded from the suite settings, enabling a more 'global' oriented view of configuration.

      type: 'opaque'
      mountPath: '/secret-location'
      data:
        - name: secretInfo
          value: (base64 string)
      inheritSuite: false

Each item in secret is a map, with individual items under data representing files to be mounted into the mountPath location in the container.

If inheritSuite is provided, the secret will be loaded from the suite settings, enabling a more 'global' oriented view of configuration.

  'exposeName': 'exposeValue'

Direct mapping of the key to value provided as an environmental variable when executing the container start command.
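For instance, assuming the variable map sits under the environment descriptor as shown in the overview, a sketch might be (names and values are placeholders):

```yaml
# Hypothetical variables; names and values are placeholders
environment:
  variable:
    'LOG_LEVEL': 'debug'
    'FEATURE_FLAG': 'beta-checkout'
```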

Additionally Cyvive exposes some helper variables to identify the current context of the Micro/NanoService:

  • SELF_NAME: name of the Micro/NanoService. This will also be the hostname of the running container
  • SELF_NAME_LOADBALANCER: to assist in discovery, this is the LoadBalancer endpoint for in-cluster communication to this container and its replicas. This is relative to the deploymentTarget and not the Fully Qualified Domain Name (FQDN)
  • SELF_DEPLOYMENTTARGET: deploymentTarget that the Micro/NanoService has been deployed into
  • SELF_IP: the internal cluster IP of the container

The following self-explanatory variables are also available to the container when specified via Cyvive's governance:



      clusterDNS: ''

… is a catch-all for compatibility with non-governed processes. It is strongly recommended not to use these keys unless absolutely necessary, as each key will disable some governance functionality and introduce independent manual management scenarios that wouldn't normally be necessary.

  • clusterDNS: a hard overwrite of the cluster-internal load balancer endpoint for the Micro/NanoService. It disables the autogeneration capabilities and can help when initially migrating non-Cloud-Native items.


      # app:       autocompleted ~ appName
      # component: autocompleted ~ component
      # release:   autocompleted in PRODLIKE environments ~ canary or stable
      # tier:      autocompleted ~ suiteName
      # version:   autocompleted ~ version key
      {any others you require}

Any labels not listed above can be used to help identify applications & services. As the aforementioned labels are reserved by Cyvive for governance and asset tracking, any custom values provided for them will be ignored.

Although there is nothing stopping its use, the recommended approach is not to use hotfix as a label, or blue / green for deploys. When running Micro/NanoServices en masse at scale, canary has been observed repeatedly to be a more stable, lower-risk and more governable approach, as everything passes through a 'canary' state anyway. (Under candidate-based releases, hotfixes are just releases that have been accelerated through the canary phase.)

Additionally, Cyvive uses Shadow Traffic Replication, where return values are thrown away to prevent interference with production. This provides further isolation over the standard 'canary' approach, validating the safety of the entire ecosystem so it can be promoted as a validated whole.

There is no limit to how many labels can be specified.
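A sketch of custom labels alongside the reserved ones (team and costCentre are hypothetical label names):

```yaml
label:
  # app, component, release, tier, version are reserved / autocompleted
  team: 'payments'       # hypothetical custom label
  costCentre: 'cc-1234'  # hypothetical custom label
```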


    layer: 'base'

… is a concept often used in Enterprise Architecture and earlier iterations of MicroServices. Cyvive underwent an extremely careful, active engagement process with its users prior to introducing this key.

The layer concept is used as part of the dependency graph generation process, prioritising and guaranteeing deployment of each layer prior to commencing the next, and failing fast when any layer fails to deploy.

Layers in order:

  1. data
  2. communication
  3. cache
  4. backend
  5. frontend

While not strictly necessary, the layer should be specified if known, as it allows for accelerated parallel deployment in the desired deploymentTarget.
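For example, a backend-layer service could be declared as follows (values illustrative):

```yaml
exampleMicro:
  layer: 'backend'   # deployed after the data, communication and cache layers
  version: 'v1.8.x'
```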


    repository:
      domain: 'hub.docker'
      name: 'exampleRedux'
      officialImage: false
      owner: 'exampleRedux'

… the image registry autogenerated name uses the format repository/owner/name; as such, the default without an image specified would be one of:

  • hub.docker/exampleSuite/exampleMicro, or
  • hub.docker/exampleSuite/exampleMicro-exampleNano if the nano key has been provided, as seen earlier in repository

This can be overridden to anything you need, in any combination, using the following values:

  • domain: overrides suite or template domain settings
  • name: overrides exampleMicro in the sample. This impacts the deployed application name & container image repository URL generation.
  • officialImage: a structural specification for DockerHub, where official images have a different retrieval structure. Setting this to true would result in exampleMicro being treated as the official image name; if provided, name would still override it as the official image name.
  • owner: override for owner in the technology descriptor
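As a sketch, pulling an official DockerHub image instead of the autogenerated name could look like the following (redis is an assumed example image):

```yaml
repository:
  domain: 'hub.docker'
  officialImage: true  # DockerHub official-image retrieval structure
  name: 'redis'        # assumed example; overrides exampleMicro

# -> image retrieved as the official image 'redis' rather than
#    hub.docker/exampleSuite/exampleMicro
```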


    resource:
      max:
        cpu: 500
        memory: 1Gi
      min:
        cpu: 300
        memory: 1Gi
      qos: ''

… allocation is an important part of all container orchestration schedulers, and these values should be provided prior to deploying a Micro/NanoService to a production deploymentTarget, although if not specified Cyvive will operate without issue.

  • max: absolute maximum requirements that we are prepared to allocate.
  • min: minimum required in order to guarantee application boot and readiness for traffic interaction.

qos is mandatory should min or max be specified.

If neither min nor max is specified, then template defaults (if specified) will be used. The ability to provide template resource defaults ensures safe co-habitation of Services / Applications / Components when / if they go rogue.

cpu is specified in units of milli-CPU ('m'); thus, for a single CPU core, 1000 should be used. The 'm' itself is omitted and should not be specified. memory should always have the multiplier specified as part of the value, i.e. 'Gi'; any suitable value can be chosen from: Ki, Mi, Gi, Ti, Pi, Ei. qos follows this approach:

  • guaranteed: highest possible level, everything not this level will suffer 'pause' events to ensure these pods continue to operate.
  • burstable: default min values are allocated to the pod as minimum required to run. No upper limits are placed on resources.
  • effort: can be used when the application is the lowest priority of them all. min, max and namespace default values are totally ignored. (Currently unimplemented due to lack of user demand.)
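Since qos is mandatory whenever min or max is specified, a minimal burstable sketch might be:

```yaml
resource:
  min:
    cpu: 300       # 0.3 CPU cores ('m' omitted)
    memory: 1Gi
  qos: 'burstable' # mandatory once min or max is given; no upper limits applied
```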



configuration structure still stabilizing

    security:
      account:
        name: alternativeOne
        reference: admin

  • account: should an account be required that isn't the suite security account or 'default' (i.e. another suite's account), it can be overridden here. Specifying it will create the account if it doesn't exist.



configuration structure still stabilizing

    stateful:
      cloudNative: false
      databaseName: ''
      individualServices: false
      replica: 3
      sharedStorage: false
          mountPath: '/avolume'
          size: '10Gi'
          storageClass: ''

  • cloudNative: enables the ability to deploy stateful applications in parallel, and will automatically compact the number of replicas down in environments that aren't 'HALIKE' or 'PRODLIKE' to save resources.
  • databaseName: standard application naming will be applied if this field is omitted. It's frequently used in custom templates for configuring some of the expected internals.
  • individualServices: some applications can operate under a common service endpoint; others, such as MongoDB, require fixed service endpoints for each database.
  • replica: number of PODs that should be deployed; if the backend supports it, anti-affinity rules will already be in place per Availability Zone and Host.
  • sharedStorage: determines if the PODs should mount the same storage or have unique storage per pod (warning: multi-mount storage is unsupported by most storage drivers).
  • storageClass: the type of storage strategy that should be applied.

In providing a consistent minimal configuration, the stateful configuration integrates with endpoint, which should be used for access accordingly.
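As an illustration of the MongoDB-style case mentioned above (values assumed):

```yaml
# Assumed values for illustration
stateful:
  cloudNative: true         # replicas compacted outside HALIKE/PRODLIKE
  individualServices: true  # fixed service endpoint per database (e.g. MongoDB)
  replica: 3
  sharedStorage: false      # unique storage per pod
```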

Amount of time that should be given after sending the kill signal to the container OS before terminating and removing the container.


    version: 'latest'

… versioning standards and their application are constantly debated, with different internal standards used within organisations. Internally, Cyvive maintains governance versions based on this key and value. For effective governance of infrastructure in cloud native approaches, Semantic Versioning (SemVer) is the sanest choice, while Micro/NanoService versioning is typically best suited to ComVer.

Cyvive's integration with SemVer only tracks major.minor.patch; the extensions format is stripped off for tracking purposes.

If it's necessary to modify governance information, then the SemVer version should be incremented to prevent cross-contamination of previously governed assets.

Container images using SemVer tags are not re-pulled from the image repository each time, as they should be (and are assumed to be) immutable.

Static labels, i.e. latest, can also be used, with the understanding that configuration changes will be applied to all future deploymentTargets and container images will be re-pulled every time.
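The two approaches side by side (tags illustrative):

```yaml
# Pinned SemVer: image assumed immutable, not re-pulled
version: 'v1.8.3'

# Static label: re-pulled every time; configuration changes apply to all
# future deploymentTargets
# version: 'latest'
```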