Alexandre Lemaire

Setting up Bamboo and PHPSPEC

Setting up PHPSPEC tests on Atlassian Bamboo can be a real pain; it's fundamentally simple, but it's so easy to kludge and the build process logs aren't that great.  This guide will help you set things up from A to Z.  

Setting up your Stages

A single stage should do.  Click on the Create Stage button, give it a name, and save.

Stages tab, in the Plan Configuration area.

With your Stage created, access the Tasks tab and configure your Source Code Checkout task, defining the repository it will download.  It will draw the list of repositories from any source control tool you've linked.  (If you haven't linked one yet, do that before continuing.)

Next, we configure our tasks.

The Composer Build Task

  1. Click on 'Add Task'
  2. Find the 'Script' task and click 'Add'
  3. In its configuration window, set the Task Description to 'Composer Build'
  4. Interpreter will be Shell
  5. Script Location will be Inline
  6. Your Script Body will contain:
/usr/local/bin/composer update 1>&2

Leave the remaining details blank, and click save.


The PHPSpec Task

If your developers are like mine, they run their spec tests in pretty mode, but Bamboo will need the output in junit format.  Configure a Script task as you did previously, changing the name to something meaningful like Run PHPSpec, selecting an Interpreter of /bin/sh, and using this Script Body to swap pretty for junit:

#!/bin/bash
sed -i -e 's/formatter.name: pretty/formatter.name: junit/g' phpspec.yml
# Make sure the output folder exists before redirecting into it
mkdir -p build
vendor/bin/phpspec run > build/junit.xml 2>/dev/null
# Always exit 0 so Bamboo continues to the JUnit evaluation task
exit 0

Leave the remaining fields blank, and click Save.  We do things this way because phpspec won't exit with 0 if any specs fail, and Bamboo treats a non-zero exit code as a failed task.  Forcing exit 0 lets phpspec finish, so subsequent tasks still run (such as the one where your JUnit results are evaluated).

The JUnit Task

Lastly, you'll need to configure the test task.

  1. Click on Add Task and select the JUnit Parser type
  2. Give it a Task Description of Evaluate Test Results
  3. Specify custom results directories should read build/junit.xml
  4. Click Save

Storing Artifacts

Still on the same screen, click on the Artifacts tab up top.  Then, click on the Create definition button in the left hand corner of this tab.

  1. Location should read build
  2. Copy pattern should read *.xml
  3. The Shared checkbox remains unchecked
  4. Click Save

Clover Coverage

Done with Artifacts, click on the Miscellaneous tab and check the Use Clover to collect Code Coverage for this build checkbox.  A section will expand, in which you select Clover is already integrated into this build and a clover.xml file will be produced.  Lastly, specify the Clover XML Location as build/coverage.xml.


Putting it all together

The rest of the magic is actually done in your code's phpspec.yml configuration and in your project's composer.json.

Your phpspec configuration would typically look like this:

suites:
    Lemonade:
        namespace: Lemonade
        spec_prefix: Spec
        src_path: module/Lemonade/src/
        spec_path: module/Lemonade/bundle
#
#  .... the rest of your suites are here
#
formatter.name: pretty
extensions:
  PhpSpecCodeCoverage\CodeCoverageExtension:
    format:
      - html
      - clover
    output:
      html: build/coverage
      clover: build/coverage.xml
    whitelist:
      - module

Note the inclusion of the PhpSpecCodeCoverage\CodeCoverageExtension, which outputs into our build folder; that's what keeps all the .xml artifacts in Bamboo.  Install that extension through composer by requiring henrikbjorn/phpspec-code-coverage, as shown below.
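Assuming composer is on your PATH, the install is a one-liner (use --dev if you'd rather keep it out of production installs):

composer require --dev henrikbjorn/phpspec-code-coverage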

Alexandre Lemaire

Testing Zend Framework Controllers, with Plugins, using PHPSpec

Quick how-to. Get your Zend Framework controller plugins into your PHPSpec examples.

This is a question I'm often asked!  I thought I'd create a reference to help those in similar need.  Cutting straight to it: tests (examples) won't work out of the box if you are relying on things like 'params' or 'auth'.  The reason is simple: when you test controllers with PHPSpec, the Controller doesn't go through its usual init process, so its PluginManager won't work.

The knee-jerk reaction might be to superclass the test to initialize controllers, but this is bad practice.  You're no longer speccing if you do this; you're actually testing integration (which goes against the doctrine your tests establish).

It's actually very simple.  Consider this method (which you can readily cut and paste into your Specs):

/**
 * Convenience method to mock plugins into controller specs
 * @param array $plugins
 * @return \Prophecy\Prophecy\ObjectProphecy
 */
private function createPluginManager($plugins = [])
{
    $prophet = new Prophet();
    $pluginManager = $prophet->prophesize(PluginManager::class);

    foreach ($plugins as $name => $plugin) {
        $pluginManager->get($name, Argument::cetera())->willReturn($plugin->getWrappedObject());
    }
    $pluginManager->setController(Argument::any())->willReturn(null);
    $this->setPluginManager($pluginManager->reveal());

    return $pluginManager;
}

Now, imagine we wanted to test an action that'd eventually look like this:

public function testAction()
{
    $vm = new ViewModel();
    if ($id = $this->params()->fromRoute('id')) {
        $vm->setVariable('id', $id);
    }
    $vm->setTerminal(true);

    return $vm;
}

You would structure your test like so:

namespace Spec\Application\Controller;

use PhpSpec\ObjectBehavior;
use Prophecy\Argument;
use Prophecy\Prophet;
use Zend\Mvc\Controller\Plugin\Params;
use Zend\Mvc\Controller\PluginManager;
use Zend\View\Model\ViewModel;

class IndexControllerSpec extends ObjectBehavior
{
    /**
     * Convenience method to mock plugins into controller specs
     * @param array $plugins
     * @return \Prophecy\Prophecy\ObjectProphecy
     */
    private function createPluginManager($plugins = [])
    {
        $prophet = new Prophet();
        $pluginManager = $prophet->prophesize(PluginManager::class);

        foreach ($plugins as $name => $plugin) {
            $pluginManager->get($name, Argument::cetera())->willReturn($plugin->getWrappedObject());
        }
        $pluginManager->setController(Argument::any())->willReturn(null);
        $this->setPluginManager($pluginManager->reveal());

        return $pluginManager;
    }

    function it_can_use_params_now(Params $params)
    {
        $params->__invoke(Argument::any(), Argument::any())->willReturn($params);
        $params->fromRoute('id', Argument::any())->willReturn(1);
        $this->createPluginManager([
            'params' => $params,
        ]);

        $viewModel = $this->testAction();
        $viewModel->shouldHaveType(ViewModel::class);
        $viewModel->getVariable('id')->shouldBe(1);
    }
}

You can build on this to optimize your tests, calling the plugin creation method strategically.
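For instance, here's a hedged sketch of wiring several plugins at once.  The flashMessenger expectations are assumptions about what your controller might call, not part of the original example (you'd also add use Zend\Mvc\Controller\Plugin\FlashMessenger; up top):

function it_wires_multiple_plugins(Params $params, FlashMessenger $flashMessenger)
{
    $params->__invoke(Argument::any(), Argument::any())->willReturn($params);
    $params->fromRoute('id', Argument::any())->willReturn(1);
    // Hypothetical expectation; stub whatever your action actually calls
    $flashMessenger->addSuccessMessage(Argument::type('string'))->willReturn($flashMessenger);

    // The array keys must match the plugin names your controller invokes
    $this->createPluginManager([
        'params'         => $params,
        'flashMessenger' => $flashMessenger,
    ]);

    $this->testAction()->shouldHaveType(ViewModel::class);
}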

Adjusting getRequest()

Now, you might be running into a circumstance where you need to modify 'getRequest' to satisfy your controller tests.  You'll have quickly noted that there is no 'setRequest', for a good reason.  If you are in a situation where this is required, here's a second helper that lets you trigger a 'fake' dispatch cycle to feed in a mocked request.  This method, as with createPluginManager above, can go into your Spec.  You could create a clever parent class that features these two if you are using them frequently.

/**
 * Simulate dispatching the request, lets you modify getRequest()'s response
 *
 * @param $request
 * @return mixed
 */
private function dispatchRequest($request)
{
    $prophet = new Prophet();
    $routeMatch = $prophet->prophesize(RouteMatch::class);
    $routeMatch->getParam('action', Argument::any())->willReturn('unused');

    $mvcEvent = $prophet->prophesize(MvcEvent::class);
    $mvcEvent->getName()->willReturn('unused');
    $mvcEvent->getRouteMatch()->willReturn($routeMatch);
    $mvcEvent->getRequest()->willReturn($request);
    $mvcEvent->setRequest(Argument::any())->willReturn($mvcEvent);
    $mvcEvent->setResponse(Argument::any())->willReturn($mvcEvent);
    $mvcEvent->setTarget(Argument::any())->willReturn($mvcEvent);
    $mvcEvent->setName(Argument::any())->willReturn($mvcEvent);
    $mvcEvent->stopPropagation(Argument::any())->willReturn($mvcEvent);
    $mvcEvent->propagationIsStopped(Argument::any())->willReturn($mvcEvent);
    $mvcEvent->setResult(Argument::any())->willReturn($mvcEvent);
    $mvcEvent->getResult()->willReturn($mvcEvent);

    $this->setEvent($mvcEvent);

    return $this->dispatch($request);
}

With that function in your spec test, and an action that looks like this:

public function testAction()
{
    $vm = new ViewModel();
    if ($id = $this->params()->fromRoute('id')) {
        $vm->setVariable('id', $id);
    }

    if ($this->getRequest()->isPost()) {
        $vm->setVariable('p', true);
    }

    $vm->setTerminal(true);

    return $vm;
}

You can then write a test that works like so:

function it_can_use_params_with_post_now(Params $params, Request $request)
{
    $params->__invoke(Argument::any(), Argument::any())->willReturn($params);
    $params->fromRoute('id', Argument::any())->willReturn(1);
    $this->createPluginManager([
        'params' => $params,
    ]);

    $request->isPost()->willReturn(true);

    $this->dispatchRequest($request);
    $viewModel = $this->testAction();
    $viewModel->shouldHaveType(ViewModel::class);
    $viewModel->getVariable('id')->shouldBe(1);
    $viewModel->getVariable('p')->shouldBe(true);
}

Hope this helps!  I'd certainly love to learn an easier way -- feel free to drop a comment below!

Alexandre Lemaire

Setting up a new VPC with an Elastic Load Balancer within AWS

Simple guide to setting up an AWS VPC with public/private zones and an ELB.

If you're used to EC2-Classic, or are simply new to all this, creating a VPC with an ELB can be a puzzling experience.  The mystery typically begins when you've set up your subnets and aren't too sure why things don't work once you add your ELB.  If you're being greeted by a big 503, or are doing some homework before you tackle the job, this post's a good primer.

We're going to create a VPC with the necessary public and private subnets, routes, and related devices.


1. Create your VPC

Your first task is to create your VPC.  Head over to the AWS services panel and click VPC to access the VPC panel.  Click on Your VPCs in the menu on the left, then click Create VPC and the creation modal appears.

VPC creation modal

Specify the following details:

  • Name Tag: WhateverYouWant
  • CIDR block: 10.0.0.0/16
  • Tenancy: your decision

When created, it'll appear in the VPC list.


2. Create your Subnets

Subnets are address blocks within your VPC to which you can assign different routes, ACLs, and appliances.  (Each /24 below carves 256 addresses out of the VPC's /16; AWS reserves five per subnet, leaving 251 usable.)

First we'll create two subnets into which our application ELB can spawn its balancers.  The AWS panel doesn't make this very clear, but the ELB needs a 'sandbox' for its balancer instances; the availability zones aren't treated quite the same as they are in EC2-Classic (seemingly, just zones where the instances served by the ELB exist).  You'll note as your ELB works that the count of available IPs in your ELB subnets mysteriously diminishes.  Create the first subnet with:

  • Name Tag: ELB Zone 1
  • VPC: WhateverYouWant
  • Availability Zone: Pick One
  • CIDR Block: 10.0.20.0/24

Then another:

  • Name Tag: ELB Zone 2
  • VPC: WhateverYouWant
  • Availability Zone: Pick A Different One
  • CIDR Block: 10.0.21.0/24

When you are done with these two:

  1. Select one using the 'square' checkbox on the left
  2. Click on the "Subnet Actions" button up top
  3. Select "Modify Auto-Assign Public IP"
  4. Set things so that it automatically assigns.
  5. Save (and repeat for the other)

Automatically assigning public IPs within subnets

These ELB subnets will host the ELB's own balancer instances.  We still need to create the subnets that'll host your application instances.  As long as they are in the same availability zone as their "mate" balancer subnets, they can talk to each other.  Create these two, then, as boundaries for your eventual 'real' app servers.

  • Name Tag: Application Server Zone 1
  • VPC: WhateverYouWant
  • Availability Zone: (Same as ELB Zone 1)
  • CIDR Block: 10.0.0.0/24

And its mate:

  • Name Tag: Application Server Zone 2
  • VPC: WhateverYouWant
  • Availability Zone: (Same as ELB Zone 2)
  • CIDR Block: 10.0.1.0/24


3. Create your Internet Gateway

Creation is pretty straightforward: just give it a name, then select it and use Attach to VPC to attach it to your VPC.  The Internet gateway is necessary since your instances will need Internet access, e.g., apt-get, composer, etc.

Creation menu on the internet gateway panel

4. Create a Route Table

To make your ELB subnets Internet-accessible, you have to associate them with an Internet Gateway via a route table.  Straight out of the AWS documentation:

An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IP addresses.

Head over to the Route Tables menu on the left, and click Create Route Table.  Use these inputs:

  • Name tag: Custom Route
  • VPC: WhateverYouWant (just being consistent here, same throughout this article)

After it is created, select it, and click the "Routes" tab in the bottom half.  Connect it to your Internet Gateway on route 0.0.0.0/0 like so:

Custom Route Table 1

Then in your Subnet Associations tab, associate it with your two public ELB subnets, ELB Zone 1 and ELB Zone 2.
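If you'd rather script this than click through the console, the same flow (sections 1 through 4) looks roughly like this with the AWS CLI.  This is a sketch: every vpc-XXXX, subnet-XXXX, igw-XXXX, and rtb-XXXX is a placeholder you'd substitute from the previous command's output, and you'd run the subnet and association commands once per zone.

# Create the VPC and the two public ELB subnets
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.20.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-XXXX --cidr-block 10.0.21.0/24 --availability-zone us-east-1b

# Auto-assign public IPs in each ELB subnet
aws ec2 modify-subnet-attribute --subnet-id subnet-XXXX --map-public-ip-on-launch

# Create the Internet Gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-XXXX --vpc-id vpc-XXXX

# Route table with a default route out the gateway, associated to each ELB subnet
aws ec2 create-route-table --vpc-id vpc-XXXX
aws ec2 create-route --route-table-id rtb-XXXX --destination-cidr-block 0.0.0.0/0 --gateway-id igw-XXXX
aws ec2 associate-route-table --route-table-id rtb-XXXX --subnet-id subnet-XXXX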

5. Prepare your Security Groups

We're going to create a few groups here.  These are the magic glue, so craft them carefully.

LoadBalancer: This group will be attached to the ELB.

Type        | Protocol | Port Range | Source
HTTP (80)   | TCP (6)  | 80         | 0.0.0.0/0
HTTPS (443) | TCP (6)  | 443        | 0.0.0.0/0

Outbound: allow all traffic (ALL on 0.0.0.0/0)


AppServers: This group is for the EC2 application servers.

Type        | Protocol | Port Range | Source
SSH (22)    | TCP (6)  | 22         | Your IP here
HTTP (80)   | TCP (6)  | 80         | LoadBalancer secgroup
HTTPS (443) | TCP (6)  | 443        | LoadBalancer secgroup


6. Create your ELB

Head over to your EC2 panel, and create a Load balancer.  

Step 1. Define Load Balancer

During its creation, select your VPC as the Create Inside value, then select ELB Zone 1 and ELB Zone 2 as its Available Subnets.

Step 2. Assign Security Groups

Select the group you created previously, LoadBalancer.

Step 4. Configure Health Check

Configure this per however you will structure a health-check response in your app.  If you have no index.html, point the ping target at something that responds (index.php perhaps!).

Step 5. Add EC2 Instances

Very straightforward.  Leave "Enable Cross-Zone Load Balancing" checked.  If you haven't yet launched EC2 instances into your private subnets (the Application Server zones we created above), just come back after you have launched them.  If you use autoscale groups or any other configuration, it's easy enough to bind them to this ELB at that point.


Then, do what you usually do to point a domain to your ELB, using its CNAME, or directly configuring Route 53 to use the ELB as an alias on your zone records.


Your next steps then, might be to:

  • Install OpenVPN through the EC2 Marketplace so that you can gain access to things like databases launched within the VPC
  • Create 2 more subnets, and a secgroup for Lambda
Alexandre Lemaire

AWS DataPipeline S3 to RDS using PHP

Quick lunchtime post so I don't forget how I cobbled PHP SDK and AWS DataPipeline S3 to RDS template together.

We needed to load data into RDS on the tail end of an ETL process.  The L, unfortunately, was standing for "Long" instead of "Load", and the client-driven window of time that we had to load things in was being eclipsed by the rapidly growing mass of data.

AWS gives us DataPipeline for this.  It's billed as:

AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premise data sources, at specified intervals. 

If you access your AWS console and find DataPipeline, you'll see a nice splash page on startup that lets you configure your flows; luckily, there's one template specifically tailored to moving things from S3 to RDS: Load S3 data into RDS MySQL table.

Creating Your Pipeline

So, select that template and fill out all of the fields.

One stipulation: under "Schedule", select "Run on pipeline activation".

When you save everything you'll get dropped onto a graphical editor.  If you're like me, here's where you're probably scratching your head a bit.  We're going to ditch this panel for now.


Configuration Part 1 : IAM User

Hop on over to Identity & Access Management (IAM) and create a user that'll use DataPipeline. I'm going to name mine data_pipeline_agent.  

After the user is created (and you've logged its Key and Secret somewhere), associate the DataPipeline-created roles to it:

  1. Click your user in the Users list.
  2. Click on the Permissions tab
  3. Click the Attach Policy button
  4. Separately, attach AWSDataPipelineRole and AmazonEC2RoleforDataPipelineRole
  5. Attach a third policy to let the user upload to S3 (you have one?)*

*If you haven't created an S3 policy already, you can create one based on the example below and attach it to your user.  YMMV.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1448918570000",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketAcl",
                "s3:GetBucketCORS",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTorrent",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME/*"
            ]
        },
        {
            "Sid": "Stmt1448918570001",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketAcl",
                "s3:GetBucketCORS",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTorrent",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-BUCKET-NAME"
            ]
        }
    ]
}

Configuration Part 2 : PHP Project

Shell into your project folder, and issue composer init to get your composer file set up.  If you haven't installed composer, you can Google composer quick-starts ad nauseam.

Next, issue: composer require aws/aws-sdk-php

That'll install all of the AWS goodies that you need.

Next, create a PHP file inside your project folder to contain the script we'll write.  Your script file should look like this:

<?php
// composer autoload
include 'vendor/autoload.php';

$expected_filename = '20160407.csv';

try
{

    //
    // 0.  Process your CSV file here, I was dumping mine into $expected_filename
    //



    //
    // 1. Upload the file and create the Sdk instance
    //

    $sdk = new Aws\Sdk([
        'credentials' => [
           'key'    => 'IAM_KEY',  // data_pipeline_agent
           'secret' => 'IAM_SECRET',
        ],
        'region' => 'us-east-1',
        'version' => 'latest',
    ]);


    $s3Client = $sdk->createS3();
    $s3Client->putObject([
        'Bucket' => "your-bucket-name",
        'Key'    => $expected_filename,
        'SourceFile' => $expected_filename,
    ]);


    //
    // 2. Trigger DataPipeline
    //
    $client = $sdk->createDataPipeline();
    $client->activatePipeline([
        'pipelineId' => 'YourPipelineID',
        'parameterValues' => [
            [
                'id' => 'myInputS3Loc',
                'stringValue' => 's3://your-bucket-name/' .  $expected_filename,
            ],
            [
                'id' => '*myRDSPassword',
                'stringValue' => '',
            ],
            [
                'id' => 'myRDSUsername',
                'stringValue' => 'your_rds_user',
            ],
            [
                'id' => 'myRDSTableInsertSql',
                'stringValue' => 'INSERT INTO yourTable ( `user_id`, `type`, `time_recorded`, `sequence_number`, `station_id` ) VALUES ( ?, ?, ?, ?, ? )',
            ],
            [
                'id' => 'myRDSTableName',
                'stringValue' => 'your_table_name',
            ],
            [
                'id' => 'myRDSConnectStr',
                'stringValue' => 'jdbc:mysql://dbinstanceid.region.rds.amazonaws.com:3306/dbname',
            ],
            [
                'id' => 'myEc2RdsSecurityGrps',
                'stringValue' => 'any_security_groups_needed',
            ],
        ],
    ]);


}
catch( Exception $x )
{
    echo "Error: " . $x->getMessage() . "\n";
}

The snippet above is pretty self-explanatory; feel free to ask questions of course!  Here are some frustrations explained:

  1. The config in the web GUI is artificial.  Anything you pass in through the SDK will overwrite whatever you punch into the web panel.  I wasted a bit of time here thinking the panel was a special "fallback" type of config.  No niceties here: if you don't fully define the config at call time, the blank parameters truly become blank and the DataPipeline run fails.
  2. The template has an asterisk in front of the RDS password.
  3. There's a pretty long lag between the second you use the SDK and the second at which the web GUI reacts.  Be patient (though admittedly, this makes debugging a PITA); a small polling sketch follows this list.
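Since the GUI lags, you can poll the pipeline's state through the SDK instead of refreshing the panel.  A minimal sketch, reusing the $client from the script above (same 'YourPipelineID' placeholder):

// Poll the pipeline's state directly rather than refreshing the web GUI
$result = $client->describePipelines([
    'pipelineIds' => ['YourPipelineID'],
]);

// Each pipeline description reports its state in the '@pipelineState' field
foreach ($result['pipelineDescriptionList'][0]['fields'] as $field) {
    if ($field['key'] === '@pipelineState') {
        echo "Pipeline state: {$field['stringValue']}\n";
    }
}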

Let me know if this helps you!  I found the documentation and the GUI to be pretty disjoint; hopefully this glues them together for you.

Alexandre Lemaire

Preparing your Zend Framework 3 Controllers. Bye ServiceLocator.

Lazy band-aid factory to help you migrate beyond zend-mvc 2.7.  Might save you from writing a ton of controller factories, and controller tests.

Get the Factory in this post as a part of my (free) Zend Framework 3 Auto-Wiring Package! 

The recent march toward ZF3 has sealed the blockade that keeps the service locator out of all controllers.  If you've been developing a ZF2 app, you're probably pinning your composer constraints to avoid zend-mvc 2.7+.  Pause that version freeze; the patch is below.

The deprecation of this pivotal auto-service is justified by an academic mix of "it's an anti-pattern", code rigor, and testability, all true.  However rationalized, the real-world impact can be a PITA.  If you want to stay up-to-date, you've got to face the music.  I did anyway, looking at the bevy of controllers I'd have to deal with if I wanted to keep current.

Anecdotally, I hit IRC quickly to ask "how are you guys dealing with this?".  All I got was "not upgrading yet".

If you've read this far, you're in my shoes -- keep reading!

Here then, is an abstract factory that you can modify to help you get around this zend-mvc change.  All you have to do is type-hint your constructors with the right class names, and this abstract factory will use reflection to inject your dependencies for you.  What's more, some cheat code can sub parts in; for example, if you use array $config or other aliases as constructor parameters, it'll fetch the ZF config and so forth.

The abstract factory relies on the fact that you are using ::class to declare your service manager components.  If you're not using ::class, you should!  e.g.:

'service_manager' => [
    'factories' => [
        SuperService::class => SuperServiceFactory::class,
    ],
],

Here's the abstract factory (tailored to Controllers) that uses reflection to identify dependencies.

use Zend\ServiceManager\AbstractFactoryInterface;
use Zend\ServiceManager\ServiceLocatorInterface;

class LazyControllerFactory implements AbstractFactoryInterface
{

    /**
     * Determine if we can create a service with name
     *
     * @param ServiceLocatorInterface $serviceLocator
     * @param                         $name
     * @param                         $requestedName
     *
     * @return bool
     */
    public function canCreateServiceWithName(ServiceLocatorInterface $serviceLocator, $name, $requestedName)
    {
        list( $module, ) = explode( '\\', __NAMESPACE__, 2 );
        return strstr( $requestedName, $module . '\Controller') !== false;
    }


    /**
     * These aliases work to substitute class names with SM types that are buried in ZF
     * @var array
     */
    protected $aliases = [
        'Zend\Form\FormElementManager' => 'FormElementManager',
        'Zend\Validator\ValidatorPluginManager' => 'ValidatorManager',
        'Zend\Mvc\I18n\Translator' => 'translator',
    ];

    /**
     * Create service with name
     *
     * @param ServiceLocatorInterface $serviceLocator
     * @param                         $name
     * @param                         $requestedName
     *
     * @return mixed
     */
    public function createServiceWithName(ServiceLocatorInterface $serviceLocator, $name, $requestedName)
    {
        $class = new \ReflectionClass($requestedName);
        $parentLocator = $serviceLocator->getServiceLocator();
        if( $constructor = $class->getConstructor() )
        {
            if( $params = $constructor->getParameters() )
            {
                $parameter_instances = [];
                foreach( $params as $p )
                {

                    if( $p->getClass() ) {
                        $cn = $p->getClass()->getName();
                        if (array_key_exists($cn, $this->aliases)) {
                            $cn = $this->aliases[$cn];
                        }

                        try {
                            $parameter_instances[] = $parentLocator->get($cn);
                        }
                        catch (\Exception $x) {
                            echo __CLASS__
                                . " couldn't create an instance of $cn to satisfy the constructor for $requestedName.";
                            exit;
                        }
                    }
                    else{
                        if( $p->isArray() && $p->getName() == 'config' )
                            $parameter_instances[] = $parentLocator->get('config');
                    }

                }
                return $class->newInstanceArgs($parameter_instances);
            }
        }

        return new $requestedName;

    }
}

Setup

Here's a run down of what your setup would look like in parts:

Abstract Factory | module.config.php

'controllers' => [
    'abstract_factories' => [
        LazyControllerFactory::class,
    ]
],

Route Setup | module.config.php

Your routes have to use class names, or it won't work.

'home' => [
    'type' => 'Literal',
    'options' => [
        'route' => '/',
        'defaults' => [
            'controller' => \Application\Controller\IndexController::class,
            'action' => 'index',
        ],
    ],
],

Controller | fix dependencies

Your controller constructor is where the factory discovers dependencies using reflection.  Note: in cases where you have an uber-complex setup, you should probably stick to a bona fide factory.  That's OK; ZF will match explicit factories before it tries abstract factories.  The lazy factory can be your backup.

public function __construct( FormElementManager $formElementManager, ConfigurationMapper $configurationMapper, array $config )
{
    $this->formElementManager = $formElementManager;
    $this->configurationMapper = $configurationMapper;
    $this->countryConfig = $config['lemonade']['default_country'];
}

After this, it's just a matter of removing any reference to $this->getServiceLocator() in your controller code.  Remove the locator, add the dependency you were locating to the constructor; rinse and repeat.
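For example, a quick before-and-after sketch.  UserMapper and its fetchAll() are hypothetical stand-ins for whatever service you were locating:

// Before: fetching the dependency through the ServiceLocator
public function indexAction()
{
    $mapper = $this->getServiceLocator()->get(UserMapper::class);
    return new ViewModel(['users' => $mapper->fetchAll()]);
}

// After: the dependency moves to the constructor, where
// LazyControllerFactory injects it via reflection
private $userMapper;

public function __construct(UserMapper $userMapper)
{
    $this->userMapper = $userMapper;
}

public function indexAction()
{
    return new ViewModel(['users' => $this->userMapper->fetchAll()]);
}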

Good luck with your migration!  I always welcome improvements and feedback!  

I kept this in post-migration.  It lets me prototype just as fast as when I had the SL readily available and makes things a bit less nebulous when comparing tests to controllers/services, etc.
