Maintenance (Resolved)
  • Priority - Low
  • Affecting Server - PLESK MAGENTO
  • Thu 24/10/2019 11.00AM-11.10AM

    We will be upgrading plskmage.createhosting.co.nz as part of a series of updates that we are rolling out to all servers. We will be adding PHP 7.0 - PHP 7.3 support. The server will be rebooted as part of the upgrade process and services will be unavailable for a brief period (5-10 min). Due to the number of changes, some disruption to services such as mail and web is to be expected.

  • Date - 24/10/2019 10:58 - 24/10/2019 13:22
  • Last Updated - 24/10/2019 11:00
Maintenance (Resolved)
  • Priority - High
  • Affecting Server - PLESK MAGENTO
  • Sat 31/08/2019 8.30PM-9.30PM

    We will be upgrading plskmage.createhosting.co.nz to Plesk Onyx as part of a series of updates that we are rolling out to all servers. The server will be rebooted a number of times as part of the upgrade process and services will be unavailable for brief periods (5-10 min) during those times. Due to the number of changes, some disruption to services such as mail and web is to be expected.

  • Date - 31/08/2019 20:30 - 24/10/2019 10:58
  • Last Updated - 24/10/2019 10:58
Internal System Changes (Resolved)
  • Priority - Low
  • Affecting System - Client Billing & Support System
  • We will be changing our account system from Saasu to Xero during the 31 Mar 2019 EOY changeover period. This will involve multiple connected internal systems, as well as our externally facing Client Billing and Support System. Access to our Client Billing and Support system may be restricted / inaccessible during the changeover. It is expected to take a few days to transition over.

    Please reach out to us directly during this time if you require any service or accounting support.

  • Date - 01/04/2019 00:00 - 30/04/2019 20:09
  • Last Updated - 31/08/2019 20:09
High Server Load (Resolved)
  • Priority - High
  • Affecting Server - PLESK MAGENTO
  • Customers have reported high load across two servers in our Auckland datacenter. This is affecting mail and web services for the following servers:

    • plskmage.createhosting.co.nz
    • plskwp.createhosting.co.nz

    This has been escalated to the datacenter staff and is being investigated. It is possible that an SSD is failing.

    Update 11/03/2019 09:30am: One of the SSDs is showing higher latency than the other in the same RAID pair. The drive, however, reports as healthy. Dropping it from the array has improved performance. Testing will be performed on the drive, then it will be reseated before we try to re-add it to the array. Performance may be degraded while this maintenance is underway.

    Update 11/03/2019 16:22pm: The SSD has been tested, reseated and added back into the RAID array. Performance has been restored.

  • Date - 11/03/2019 06:30 - 11/03/2019 17:43
  • Last Updated - 12/03/2019 11:43
Outage (Resolved)
  • Priority - Critical
  • Affecting System - NZ Data Center
  • Our monitors have picked up high packet loss at our Auckland datacenter.

    This has been escalated to the datacenter staff.

    Update Fri, 26 Oct 2018 16:25 PM UTC: The datacenter network team have identified the issue and are working with their upstream provider to resolve it.
    Update Fri, 26 Oct 2018 18:05 PM UTC: We are still working on this issue.
    Update Fri, 26 Oct 2018 18:49 PM UTC: The datacenter team are working to resolve a second issue. They have multiple engineers on-site. There is no ETA at this time.
    Update Fri, 26 Oct 2018 19:21 PM UTC: The packet loss issue has been resolved. We are continuing to monitor the situation closely.
    Update Fri, 26 Oct 2018 21:44 PM UTC: There are some concerns about the stability of our upstream's network, and that we may have packet loss issues again unless we take urgent action. We are bringing forward a planned migration to our upstream's new core network. This will happen at 11:30AM NZST. A 1-5 minute outage is expected while this happens. Once complete we do not expect any further issues.
    Update Fri, 26 Oct 2018 22:46 PM UTC: The migration to the new core was completed as planned. There were a few subnets with routing issues that have now been resolved. We will be closely monitoring the network for further issues.
    Update Mon, 29 Oct 2018 00:13 AM UTC: The network has remained stable since the migration. Please open a ticket if you are seeing any issues at all.
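The kind of packet-loss check our monitors run can be sketched as follows. This is purely illustrative: the function names and the 5% alert threshold are assumptions, not our actual monitoring configuration.

```python
# Illustrative packet-loss check of the kind a monitor might run.
# The 5% alert threshold is an assumption, not an actual setting.

def loss_percent(sent: int, received: int) -> float:
    """Percentage of probes that got no reply."""
    return 100.0 * (sent - received) / sent

def is_degraded(sent: int, received: int, threshold: float = 5.0) -> bool:
    """True if measured loss exceeds the alert threshold."""
    return loss_percent(sent, received) > threshold
```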

  • Date - 27/10/2018 05:02 - 30/10/2018 07:57
  • Last Updated - 11/03/2019 07:57
Outage (Resolved)
  • Priority - Critical
  • Affecting System - NZ Data Center
  • At 5:58AM NZST we saw machines at the Auckland datacenter become unreachable.

    At around 6:25AM NZST we saw these devices come back online.

    The datacenter has informed us that this was caused by a power issue that took some of their network devices offline.

    An alert from the fire suppression system caused the three separate supplies servicing those network devices to go offline. Datacenter technicians believe this was due to either a PDU fault or a power spike. The PDUs were replaced and service was restored.

  • Date - 24/10/2018 06:00 - 24/10/2018 06:25
  • Last Updated - 24/10/2018 12:11
High packet loss in Auckland (Resolved)
  • Priority - High
  • Affecting Other - NZ Data Center
  • We are currently experiencing high packet loss in Auckland. We are investigating.
    Update @ 0300 NZDT: Datacenter staff are still investigating this issue.
    Update @ 0320 NZDT: The issue has been resolved.
    Update @ 1030 NZDT: The outage was due to a fault on an upstream router. We are seeking further information from the data center.
    Update @ 1400 NZDT: We have been advised the failure lay with a core device, on which a line card, route processor and backplane all apparently failed at the same time. The affected device is being repaired under emergency maintenance to restore redundancy to the core network before the weekend. No customer impact from the work is expected, aside from some increased latency while traffic flows reconverge. The full NOC team as well as contract engineers are on site to deal with any issues as they arise.

  • Date - 19/10/2018 02:30 - 19/10/2018 03:30
  • Last Updated - 24/10/2018 12:11
Host server XEN software stack update (Resolved)
  • Priority - High
  • Affecting System - NZ Data Center
  • Summary: We will be updating our host servers (requires a restart) to apply Xen security hardening, performance improvements and bug fixes.

    Who is affected: Customers running VMs on our Xen hosting stack including VMs on shared hosts (WordPress and Magento Lite customers) and VMs on dedicated hosts.

    Detail: Our host server software stacks will be updated to the latest version of our Xen virtualization software, the most recent Intel microcode patches to address Intel CPU security issues, and a more recent Linux kernel for the host. To do this we will be stopping the VMs on host servers, updating the software and restarting the servers. VMs will then start up as normal.

    Note that we will not be making any changes to the VM file systems.

    The software update process typically takes 20-60 minutes per server if all goes well; 45 minutes is typical. Host servers with many VMs fall towards the longer end of that range. We reserve a 2 hour window for the upgrades.

    Timing: 6:30AM - 8:30AM NZDT 10 October 2018

    The updates will start within 0-2 hours of the stated start time.
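The stop, update, restart sequence described above can be sketched as a simple command plan. This is illustrative only: the hostnames, package names and `xl` toolstack invocations below are assumptions, not our actual tooling.

```python
# Illustrative plan for one host's update window, following the
# stop -> update -> reboot -> restart sequence described above.
# Hostnames, package names and exact commands are assumptions.

def plan_host_update(host, vms):
    """Return the ordered commands for updating one Xen host."""
    cmds = [f"ssh {host} xl shutdown -w {vm}" for vm in vms]           # stop VMs
    cmds.append(f"ssh {host} yum update -y xen kernel microcode_ctl")  # update stack
    cmds.append(f"ssh {host} reboot")                                  # boot new kernel
    cmds += [f"ssh {host} xl create /etc/xen/{vm}.cfg" for vm in vms]  # start VMs
    return cmds
```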

  • Date - 10/10/2018 06:30 - 10/10/2018 07:49
  • Last Updated - 19/10/2018 09:50
Connectivity Issues (Resolved)
  • Priority - High
  • Affecting System - NZ Datacenter
  • Wed, 29 Aug 2018 00:08 AM UTC: All access should be restored at this point. A report is pending.

    Tue, 28 Aug 2018 22:52 PM UTC: We understand there are a small number of peering routes that may still be affected, this is being investigated.

    Tue, 28 Aug 2018 22:19 PM UTC: Traffic appears to be back to normal. Our upstream confirmed there was an issue, which they believe has been resolved. They are still analysing the event.

    We are seeing some connectivity issues to our Auckland location. Investigating.

  • Date - 29/08/2018 10:20 - 29/08/2018 10:30
  • Last Updated - 10/10/2018 07:58
XEN config change and server reboot (Resolved)
  • Priority - Low
  • Affecting Server - PLESK MAGENTO
  • 13/02/2018 14:00 NZDT.

    Engineers will be making a critical config change to the XEN stack for security, performance and stability. A series of reboots will be required as part of this work. We apologise for any disruption.

    Expected downtime 5 - 30 min.

  • Date - 13/02/2018 14:06 - 13/02/2018 14:15
  • Last Updated - 14/02/2018 09:19
Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - PLESK MAGENTO
  • A server reboot is scheduled for 12/02/2018 NZST at 19:30. Expected downtime 5 min.
    Update: 19:40 - Server did not come back up. Investigating.
    Update: 20:05 - Server online. Maintenance will be scheduled for tomorrow (13/02/2018) to investigate cause and mitigate.

  • Date - 12/02/2018 19:29 - 12/02/2018 20:05
  • Last Updated - 13/02/2018 14:05
Server Maintenance (Resolved)
  • Priority - Critical
  • Affecting Server - PLESK MAGENTO
  • 03/02/2018

    We will be performing an update to plskmage.createhosting.co.nz, starting at 0300UTC today (16:00 NZST). The server will be rebooted a number of times as part of the updates. We expect the updates to take 10-15 minutes to complete and test.

    Update 16:25 NZST: the server is currently not booting into the new kernel and we are working to resolve the issue as soon as possible
    Update 16:40 NZST: the server is now operating normally

  • Date - 03/02/2018 15:00 - 03/02/2018 16:40
  • Last Updated - 03/02/2018 17:24
Critical Xen software updates and reboot (Resolved)
  • Priority - Critical
  • Affecting System - NZ XEN HOST SERVERS
  • Issue

    There is an issue affecting all Intel CPUs, referred to in the media as “Spectre” & “Meltdown” or “Kernel Side-Channel Attacks”. These articles provide a good overview:

    For customers on bare metal dedicated servers, we will be installing a kernel with the page table isolation patches.

    For customers on virtual machines we are awaiting patches for the virtualization software we use per https://xenbits.xen.org/xsa/advisory-254.html. For Managed VPS customers, we have implemented the 4.14 kernel with the page table isolation patches.

    The situation is changing on a day-to-day basis. It is a critical issue, and various patches and security updates are being developed to address the problem. We are closely monitoring the situation and will be doing all we can to ensure the security of our customers.

    Update Schedule

    10/01/2018 - 11/01/2018


    We will be performing updates to all servers on 10/01/2018 - 11/01/2018. These will be scheduled mostly for the morning, starting at 6:45AM NZST.

    The software will be updated on the host server. We will stop the VPS's running on that host server, restart the server so that it boots the new kernel, then start up your VPS. This typically takes 20-60 minutes.

    12/01/2018

    Starting at 0030UTC today (13:30 NZST) we will shutdown plskmage.createhosting.co.nz for a short time to make some changes. It will then boot up with a newer kernel. This server runs CloudLinux which is in part incompatible with the newer Xen stack (that was introduced due to the Meltdown/Spectre vulnerabilities) without some changes. After the change, the server will be rebooted using a shim that will allow it to boot the latest CloudLinux kernel but using hardware virtualization.

    13/01/2018

    We will be performing a series of incremental updates to plskmage.createhosting.co.nz, starting at 0030UTC today (13:30 NZST). The server will be rebooted a number of times as part of the updates. We expect the updates to take 2-5 hours to complete and test.

    14/01/2018

    We will be performing an update to plskmage.createhosting.co.nz, starting at 2300UTC today (12:00 NZST). The server will be rebooted a number of times as part of the updates. We expect the updates to take 10-15 minutes to complete and test.

    03/02/2018

    We will be performing an update to plskmage.createhosting.co.nz, starting at 0300UTC today (16:00 NZST). The server will be rebooted a number of times as part of the updates. We expect the updates to take 10-15 minutes to complete and test.



    Please contact us if you require more information (support@createhosting.co.nz).

  • Date - 09/01/2018 12:42 - 18/07/2018 18:48
  • Last Updated - 03/02/2018 16:01
Server Updates (Resolved)
  • Priority - Low
  • Affecting System - All Servers
  • 27/01/2018 NZST: We will be performing a number of critical security updates to all servers, which will require server reboots. Expected downtime 2-5min.

  • Date - 27/01/2018 11:00 - 28/01/2018 00:00
  • Last Updated - 03/02/2018 16:00
Host server XEN software stack update (Resolved)
  • Priority - Low
  • Affecting System - NZ XEN HOST SERVERS
  • A new Xen Security Advisory (XSA) has been posted that requires us to perform updates to some of our Xen host servers. To do this we need to stop the VMs running on affected hosts, update the host servers and restart the host servers and their VMs.

    These security advisories impact only some customers running VMs on some of our Xen-based host servers. They do not impact any customers with 'bare metal' dedicated servers. Your VM will be unavailable during the update. We expect the updates to take 45 minutes (typically) to 2 hours (less commonly) per host.

    The XSAs will be publicly released by the Xen project team on Tue May 2 UTC. We must complete this maintenance before then.

    host3.createhosting.co.nz - 2017-05-02 06:30:00 [Completed]

    host4.createhosting.co.nz - 2017-05-02 06:30:00 [Completed]

    host5.createhosting.co.nz - 2017-05-02 06:30:00 [Completed]

    host6.createhosting.co.nz - 2017-05-01 06:30:00 [Completed]

    host7.createhosting.co.nz - 2017-05-01 06:30:00 [Completed]

    host8.createhosting.co.nz - 2017-05-01 06:30:00 [Completed]

    host9.createhosting.co.nz - 2017-05-01 06:30:00 [Completed]

  • Date - 01/05/2017 00:00 - 02/05/2017 08:00
  • Last Updated - 10/01/2018 12:42
Maintenance (Resolved)
  • Priority - Medium
  • Affecting Server - PLESK WP
  • Sat 09/09/2017 11.00AM-2.00PM

    We will be upgrading plskwp.createhosting.co.nz to Plesk Onyx as part of a series of updates that we are rolling out to all servers. The server will be rebooted a number of times as part of the upgrade process and services will be unavailable for brief periods (5-10 min) during those times. Due to the number of changes, some disruption to services such as mail and web is to be expected.

  • Date - 09/09/2017 11:00 - 09/09/2017 11:30
  • Last Updated - 10/01/2018 12:41
Packet Loss (Resolved)
  • Priority - Low
  • Affecting System - NZ Data Center
  • Mon, 21 Aug 2017 21:55 PM UTC: We are working with our Auckland provider to add additional redundancy for Australian/Sydney traffic.
    Mon, 21 Aug 2017 08:20 AM UTC: Network is stable
    Mon, 21 Aug 2017 05:55 AM UTC: There was a further network blip for Auckland when the affected router was restarted.
    Mon, 21 Aug 2017 05:25 AM UTC: A line card was replaced upstream.
    Mon, 21 Aug 2017 05:03 AM UTC: Sydney traffic should be back to normal, the offending source has been isolated, traffic is now on alternate routes.

    We are aware of an incident affecting international traffic, and showing up as packet loss or 100% connectivity loss for a small number of sources/customers. Apparently an upstream router (possibly in Australia) is causing the issue. We are working with our upstream providers to isolate the issue and restore normal operation for all clients.

  • Date - 21/08/2017 17:01 - 01/09/2017 14:15
  • Last Updated - 01/09/2017 14:15
Host server XEN software stack update (Resolved)
  • Priority - Medium
  • Affecting System - NZ XEN HOST SERVERS
  • A new Xen Security Advisory (XSA) has been posted that requires us to perform updates to some of our Xen host servers. To do this we need to stop the VMs running on affected hosts, update the host servers and restart the host servers and their VMs.

    These security advisories impact only some customers running VMs on some of our Xen-based host servers. They do not impact any customers with 'bare metal' dedicated servers. Your VM will be unavailable during the update. We expect the updates to take 45 minutes (typically) to 2 hours (less commonly) per host.

    host3.createhosting.co.nz - 2017-03-28 06:30:00 [Completed]

    host4.createhosting.co.nz - 2017-03-29 06:30:00 [Completed]

    host5.createhosting.co.nz - 2017-03-30 06:30:00 [Completed]

    host6.createhosting.co.nz - 2017-03-31 06:30:00 [Completed]

    host7.createhosting.co.nz - 2017-04-01 06:30:00 [Completed]

    host8.createhosting.co.nz - 2017-04-02 06:30:00 [Completed]

    host9.createhosting.co.nz - 2017-04-03 06:30:00 [Completed]

  • Date - 28/03/2017 06:30 - 03/04/2017 07:00
  • Last Updated - 02/05/2017 13:37
Scheduled power maintenance (Resolved)
  • Priority - Low
  • Affecting System - Auckland, NZ DC
  • This notice is a heads up of low impact work that has been planned as part of regular power maintenance at the Auckland datacenter.

    Event 1: UPS Maintenance & Generator Test

    Performing UPS maintenance & a battery replacement on UPS B, followed by a generator test; no impact is expected during this time.
    Maintenance Window Start : Wed 18th Jan 2017 10:00AM NZDT
    Maintenance Window End : Wed 18th Jan 2017 05:00PM NZDT
    Duration: 7 Hours

    Event 2: Vector Power Maintenance
    Vector (power supplier) will be performing routine scheduled testing of their equipment. During this time HDDC will remove itself from the grid and run on generator power; no impact is expected.
    Maintenance Window Start : Sat 21 Jan 2017 08:45AM NZDT
    Maintenance Window End : Sat 21 Jan 2017 1:00PM NZDT
    Duration: 4h25m

  • Date - 18/01/2017 10:00 - 21/01/2017 13:00
  • Last Updated - 28/03/2017 11:25
Scheduled Maintenance (Resolved)
  • Priority - Low
  • Affecting Server - PLESK WP
  • We will be rebooting plskwp.createhosting.co.nz just after 17:00 on Jan 17th 2017. Expected downtime 5-10min.

  • Date - 17/01/2017 17:10 - 17/01/2017 17:20
  • Last Updated - 17/01/2017 17:23
Packet Loss / Network Issues (Resolved)
  • Priority - High
  • Affecting System - NZ Data Centre
  • We are seeing packet loss to servers in Auckland. Investigating and will post updates here.

    Update: data center suspect a faulty line card. Swapping over to secondary connections.
    Update: the issue seems resolved for now. Additional work may need to be scheduled for Friday night NZT or Sunday before closing this; some investigation is required first.

  • Date - 09/12/2016 17:40 - 09/12/2016 18:30
  • Last Updated - 17/01/2017 17:21
DNS / Network Issue (Resolved)
  • Priority - Critical
  • Affecting System - NZ Data Centre
  • We have had several reports of DNS issues relating to Auckland servers. We are investigating with urgency. The issue has also been escalated to our upstream provider.

    Update: Additional testing indicated one of the upstream resolving name servers was having issues. After fixing that, things gradually cleared. The issue has been resolved and we'll be taking steps to ensure that it is mitigated for future occurrences.

  • Date - 07/12/2016 18:30 - 07/12/2016 19:30
  • Last Updated - 07/12/2016 21:23
High Load (Resolved)
  • Priority - Medium
  • Affecting Other - NZ Data Center - HOST3
  • A host server is experiencing unexpected high load due to disk activity. This will be affecting sites on the following servers:

    plskmage.createhosting.co.nz
    plskwp.createhosting.co.nz
    westmere.createhosting.co.nz
    vm13.createhosting.co.nz
    vm17.createhosting.co.nz
    vm19.createhosting.co.nz
    vm20.createhosting.co.nz

    This should be resolved within 10 min, as the process completes. We apologise for the inconvenience.

    Update: resolved

  • Date - 29/11/2016 14:00 - 29/11/2016 14:31
  • Last Updated - 29/11/2016 14:31
Power Outage (Resolved)
  • Priority - Critical
  • Affecting Other - UK Data Center
  • We are seeing servers in London not pinging.

    Update at UTC 0922: We have contacted the data center.  They are aware of the issue and are working on the problem.  No details as of yet.

    Update at UTC 0934: The data center are reporting a power issue.  We are working to get further details.

    Update at UTC 0943: The data center reports there is one part of the data center with a power issue (not sure of details yet).  This is setting off the fire alarms.  At this time there appears to be no fire. The data center are working to provide further details.

    Update at UTC 1059: Most servers have come back online, some have not. We are chasing up the data center to get those back up and running ASAP. MySQL replication resync where applicable will be scheduled to minimise downtime.

    Update UTC 0345: The failed access switch has been replaced. All servers are responding.

    Please note: due to the recent earthquakes in NZ, calls to the Create Hosting emergency number may not get through. Please leave a message. 

  • Date - 16/11/2016 21:22 - 29/11/2016 14:12
  • Last Updated - 16/11/2016 11:25
Dirty Cow (Resolved)
  • Priority - Critical
  • Affecting Other - All Servers
  • There exists a bug in Linux kernels where users can 'escalate' their privileges: e.g. a regular user account, or an exploitable web app (such as an older version of WordPress or Magento), can be used to gain root.

    For more information about the issue see: https://dirtycow.ninja/

    To resolve the issue, we will need to install an updated, patched kernel. We will systematically upgrade all servers which will require a restart of your server with the fixed 4.4.26 (or similar) kernel. Please expect a short downtime of a few minutes while this is performed.
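After the restart, a quick way to confirm a server is on at least the patched kernel is to compare version tuples. A minimal sketch, assuming the release string has a plain "major.minor.patch" prefix as reported by `uname -r`:

```python
# Compare a running kernel release against the patched 4.4.26
# threshold mentioned above. Assumes a "major.minor.patch" prefix
# (e.g. "4.4.26-1-generic"); anything after "-" is ignored.

def kernel_at_least(release: str, minimum: str = "4.4.26") -> bool:
    def parse(version: str):
        return tuple(int(p) for p in version.split("-")[0].split(".")[:3])
    return parse(release) >= parse(minimum)
```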

  • Date - 27/10/2016 12:00 - 16/11/2016 01:04
  • Last Updated - 27/10/2016 13:45
Host server maintenance (requiring downtime) (Resolved)
  • Priority - High
  • Affecting System - NZ / UK Servers
  • We have received a critical Xen Security Advisory (XSA) that requires updates to be performed on our Xen host servers. To do this we need to stop the VMs running on these hosts, update the host servers and restart the host servers and their VMs.

    These security advisories impact all customers running on shared hosting or VPS's on our Xen-based host servers. They do not impact any customers with 'bare metal' dedicated servers.

    Your email and web services will be unavailable during the update. We expect the updates to take 45 minutes (typically) to 2 hours (less commonly) per host.


    Scheduling

    NZ Datacentre 1: 2016-09-03 07:00:00 - 08:59 NZT

    NZ Datacentre 2: 2016-09-04 07:00:00 - 08:59 NZT

    UK Datacentre 1&2: 2016-09-04 10:00:00 - 11:59 NZT

    Where possible we will attempt to schedule updates at times that are off peak at the host server's location. Due to the logistical demands of the update, the schedules may not be ideal for all customers; unfortunately we have little choice.

    The XSAs will be publicly released by the Xen project team on Sep 8. We must complete this maintenance before then (before the vulnerabilities become public).

    We apologise for the inconvenience, and are committed to ensuring the utmost security. 

  • Date - 04/09/2016 07:00 - 27/10/2016 13:45
  • Last Updated - 01/09/2016 13:09
Host server maintenance (requiring downtime) (Resolved)
  • Priority - High
  • Affecting System - NZ / UK Servers
  • We have received a critical Xen Security Advisory (XSA) that requires updates to be performed on our Xen host servers. To do this we need to stop the VMs running on these hosts, update the host servers and restart the host servers and their VMs.

    These security advisories impact all customers running on shared hosting or VPS's on our Xen-based host servers. They do not impact any customers with 'bare metal' dedicated servers.

    Your email and web services will be unavailable during the update. We expect the updates to take 45 minutes (typically) to 2 hours (less commonly) per host.


    Scheduling

    NZ Datacentre: 2016-07-23 02:00:00 - 09:00 NZT (Completed)
    UK Datacentre: 2016-07-23 10:00:00 - 11:59 NZT

    Where possible we will attempt to schedule updates at times that are off peak at the host server's location. Due to the logistical demands of the update, the schedules may not be ideal for all customers; unfortunately we have little choice.

    The XSAs will be publicly released by the Xen project team on July 26th. We must complete this maintenance before then (before the vulnerabilities become public).

    We apologise for the inconvenience, and are committed to ensuring the utmost security. 


  • Date - 23/07/2016 02:00 - 30/08/2016 21:25
  • Last Updated - 23/07/2016 09:09
PayPal Updates (Resolved)
  • Priority - Medium
  • Affecting System - Multiple Servers
  • We will be implementing updates for PayPal's TLS 1.2 requirements, which will require server restarts. Downtime is expected to be 15-30 min. This will be performed between 17:30 - 23:59 on 17/07/2016 NZST.
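For clients talking to PayPal, one way to enforce a TLS 1.2 floor is via Python's standard `ssl` module (3.7+). This is a minimal sketch of the idea, not the exact change applied to the servers:

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# matching the PayPal requirement described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Connections made with ctx.wrap_socket(...) will now negotiate
# TLS 1.2 or newer, or fail the handshake.
```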

  • Date - 17/07/2016 17:33 - 23/07/2016 09:01
  • Last Updated - 17/07/2016 22:58
DDoS (Resolved)
  • Priority - Low
  • Affecting System - NZ Datacenter
  • Several VPS's failed to respond for a few minutes. The DC staff provided the following details:

    International network performance was degraded for a period of several minutes as a host on our network was subjected to an international DDoS attack.
    Unfortunately our automated systems did not automatically block this attack; this will be investigated and resolved.

    The connection has been fully restored now.

  • Date - 17/03/2016 20:45 - 17/03/2016 20:55
  • Last Updated - 17/03/2016 22:24
High Packet Loss (Resolved)
  • Priority - High
  • Affecting System - London DC
  • We are experiencing light packet loss at our London datacenter. Slow connections or even timeouts may occur.
    We will update this ticket as soon as we receive details from our datacenter staff.

    Update: This seems to have resolved itself and we will continue to monitor closely.

  • Date - 07/01/2016 01:00 - 07/01/2016 02:12
  • Last Updated - 25/01/2016 21:13
Host server XEN software stack update (Dec 15) (Resolved)
  • Priority - High
  • Affecting System - NZ & UK Servers
  • Issue: There are some security issues affecting Xen hosts (and thus the VPS's that run on those hosts), per http://xenbits.xen.org/xsa/

    The details are embargoed meaning that the details of the issue cannot be made public. Our upstream datacenter is on a 'predisclosure' list so we are privy to the details of those issues (and the fixes). However we are not permitted to disclose any details to our customers (while the issues remain embargoed).

    Impact: This affects VPS's hosted with us (including those on dedicated host servers). It excludes regular dedicated servers not running the Xen hypervisor.

    Course of action: We will be applying those fixes. To do that we will stop all VPS's on a Xen host, install the updated packages and restart the host server. VPS's will come up after the reboot. Typically it will take 20-60 minutes of downtime to complete this process.

    Timing: Because of the nature of the problem we will be doing restarts around the clock on affected hosts. We will attempt to do those, where possible, outside of business hours at the host server data center location. Updates are scheduled at the following times:

    NZ Servers: 15/12/2015 9PM NZST
    UK Servers: 17/12/2015 9PM GMT


  • Date - 15/12/2015 21:00 - 17/03/2016 22:25
  • Last Updated - 15/12/2015 19:35
Host server XEN software stack update (Resolved)
  • Priority - High
  • Affecting System - NZ & UK Servers
  • Summary: Over the period of October 22nd - 24th, at 6PM nightly NZT (+0 to 2hrs), we will be updating our host servers (requires a VM restart) to address a Xen security issue before it becomes public on October 29.

    Customer action required: None

    Who is affected: All customers, all services - including Magento Optimised VPS's on our Xen hosting stack (NZ and UK).

    Detail: The Xen project report an XSA-148 security issue. Details of the issue and fix are available only to the Xen security pre-disclosure list. Details of the advisory and any fix are embargoed until October 29 to anyone (including our customers) not on that list. We are unable to provide further details about the nature or severity of the issue until after that date (when the issue becomes public).

    Our host server software stacks will be updated to address this issue. To do this we will be stopping the VMs on host servers, updating the software and restarting the servers. VMs will then start up as normal.

    Note that we will not be making any changes to the VM file systems (the issue is with the hypervisor software, not the VM).

    The software update process typically takes 20-60 minutes per server if all goes well; 45 minutes is typical. Hosts with many VMs fall towards the longer end of that range. We reserve a 2 hour window for the upgrades.



  • Date - 22/10/2015 17:22 - 24/10/2015 19:29
  • Last Updated - 15/12/2015 19:30
Host Server Maintenance (Resolved)
  • Priority - Critical
  • Affecting System - HOST3
  • 12 Oct 2015 22:15: We will be performing an LVM snapshot across all VM's on this HOST server. While no downtime is expected, there is a risk of increased IO which may briefly impact performance. Expected duration 20min.

  • Date - 12/10/2015 22:15 - 22/10/2015 17:30
  • Last Updated - 12/10/2015 22:14
Server Restart (Resolved)
  • Priority - Low
  • Affecting Server - PLESK MAGENTO
  • 29/09/2015 NZT: plskmage.createhosting.co.nz is scheduled for a reboot to complete an LVM disk resize. Estimated downtime 5 min.

  • Date - 29/09/2015 23:00 - 29/09/2015 23:09
  • Last Updated - 29/09/2015 23:02
DDoS (Resolved)
  • Priority - High
  • Affecting System - London Data Centre
  • We are seeing multiple VPS's down at the London datacenter. We are investigating.

    Update Wed, 23 Sep 2015 09:44 AM UTC: The datacenter network team reports they had a partial outage to a small part of their network. Servers are coming back online now.

    Update Thu, 24 Sep 2015 02:00 AM UTC: The datacenter team reports that the root cause was a DDoS against that part of the network. During the DDoS an affected router did not fail over properly, causing a small outage. The problem with the router fail over has been corrected and should not reoccur.

  • Date - 23/09/2015 20:06 - 24/09/2015 14:08
  • Last Updated - 24/09/2015 14:08
Unplanned Server Restart (plskmage.createhosting.co.nz) (Resolved)
  • Priority - Low
  • Affecting System - NZ DATA CENTRE
  • 05/08/2015 14:45 NZST: We will be briefly restarting the following servers to detach, resize, and then reattach backup storage devices.

    plskmage.createhosting.co.nz
    plskwp.createhosting.co.nz

    Normally this does not require a restart, however in this case we hit an issue with the hotplug which requires us to perform a manual reboot.

    Downtime is expected to be around 2-3 min per restart.

  • Date - 05/08/2015 14:45 - 24/09/2015 14:10
  • Last Updated - 05/08/2015 14:46
Server Move - NZ DC (Resolved)
  • Priority - Critical
  • Affecting Server - PLESK WP
  • 01/08/2015 (20:00 - 22:00 NZT) - We will be moving all sites on plskwp.createhosting.co.nz to our new platform based on SSD. Expected downtime is 1h15min.

  • Date - 01/08/2015 20:10 - 05/08/2015 14:40
  • Last Updated - 01/08/2015 20:14
Auckland network maintenance (informational) (Resolved)
  • Priority - Low
  • Affecting Other - NZ Data Centre
  • Monday, 3 August 2015 10:00 p.m. - Tuesday, 4 August 4:00 a.m. NZT

    Our network provider is making some configuration changes to network feeds that may be visible via a few short periods of packet loss to some services. Customer-facing impact should be limited to one or two traffic disruptions for RSTP reconvergence, lasting only a few seconds each.

    Tuesday, 11 August 2015 1:00 a.m. - 3:00 a.m. NZT:

    Work is being carried out physically near our core network feed to move some large network gear. No service impact is expected as the work does not impact any device actually carrying our traffic. However it should still be considered as an at-risk window for the duration of the work.

    ------------------------------------

    Monday, 27 July 2015 10:00 p.m. - Tuesday, 28 July 4:00 a.m. NZT

    As part of ongoing efforts to improve network reliability, our network provider is making some hardware changes to its network. This stage will involve the repositioning of selected physical links on the core network. Due to our redundant network design, we do not expect that any of our services will be affected, however the network should be considered at risk during this window.

    Update: This maintenance was completed without major issues.

  • Date - 27/07/2015 22:00 - 24/09/2015 14:10
  • Last Updated - 28/07/2015 12:39
Multiple servers unreachable in NZ data centre (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • Mon, 13 Jul 2015 20:39 UTC: The issue has been resolved. This affected all servers at the Auckland datacenter and would have caused loss of connectivity via international and domestic routes. No changes to individual servers were required to restore connectivity.

    We have the following report from the datacenter, there was some planned maintenance that should not have caused any issues:

    We have been preparing the final stages of upgrades to our core network. Work to complete the second part of the migration to the new Cisco platform was completed successfully last Monday night. At 19:27, as we were preparing for the final stage of the migration due to kick off at 22:00, our network engineers accidentally pushed a configuration change that adversely affected two of the old core devices that were scheduled to be retired tonight.

    This inadvertently resulted in some datacentre customers' IP transit services being unavailable.

    Remedial work and post Incident Action Items:
    Further fault diagnosis is being undertaken with the engineers responsible.

    We are confident that the upgrades we are performing to our network going forward will enhance overall resiliency and redundancy.

    We are disappointed that this has happened, as our core network uptime over the past 12 months has been excellent; this incident has affected our high standards of service delivery to you, our customer.

    Update Mon, 13 Jul 2015 08:36 AM UTC: Servers are responding again; we are awaiting details about the issue.

    Update Mon, 13 Jul 2015 08:13 AM UTC: A problem has been detected in the edge network core systems. Engineers are working on the issue; there is no ETA as yet.

    Update: Datacenter engineers picked up on the issue and are working on it as we speak.

    We noticed some servers were unreachable in NZ data center. We're investigating.

  • Date - 13/07/2015 19:40 - 13/07/2015 20:39
  • Last Updated - 16/07/2015 20:04
Packet Loss (Resolved)
  • Priority - Low
  • Affecting System - Auckland datacentre
  • Update: The datacenter reported a problem with one of the upstream providers; it was resolved quickly and the network returned to normal.

    Thu, 16 Jul 2015 02:02 AM UTC: We are experiencing some connectivity issues at Auckland; we are investigating.

  • Date - 16/07/2015 14:02 - 16/07/2015 14:12
  • Last Updated - 16/07/2015 20:03
Network Connectivity Issue (Resolved)
  • Priority - High
  • Affecting System - London Data Centre
  • Sat, 20 Jun 2015 13:44 UTC: We have picked up network connectivity issues to some of our data centres. We are currently investigating and will update as soon as we have more info.

    Sat, 20 Jun 2015 14:29 UTC: We have detected a DoS attack affecting some servers within our networks. Engineers in various data centres are working to mitigate it.

    Sat, 20 Jun 2015 14:40 UTC: Affected IP/s have been null routed to resolve the issues. Our London network also seems to be recovering.

    Sat, 20 Jun 2015 22:37 UTC: The null route was lifted; however, the attack resumed, causing a small downtime. We are investigating how to mitigate the issue. The network should be up again, and the null route has been reinstated.

    Sun, 21 Jun 2015 00:15 UTC: The attack seems to be directed at ns1.zonomi.com, which has been null routed. ns3.zonomi.com has also been null routed for the time being by the data center, but we are working to see if it is possible to remove that null route. We are investigating alternatives to try to debug and mitigate the issue.

    Sun, 21 Jun 2015 05:04 UTC: ns1 and ns3 continue to be null routed; we continue to investigate the issue.

    Sun, 21 Jun 2015 05:50 UTC: ns1 back in service. The attack has abated.

    Status: Zonomi is the target of a denial of service attack. We are working to mitigate the issue. Only some of its name servers are affected.
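
    Null routing, as used in the updates above, drops all traffic to the attacked address at the network edge. Below is a minimal sketch of the iproute2 commands an operator would typically use; the IP is a documentation-range example (203.0.113.0/24), since the real targets were not published, and because installing routes requires root the script only prints the commands.

```shell
#!/bin/sh
# Blackhole-route sketch for DDoS mitigation. Example IP only; the actual
# "ip route" commands require root, so this script just prints them.
TARGET="203.0.113.10"
ADD_CMD="ip route add blackhole ${TARGET}/32"   # drop all traffic to the target
DEL_CMD="ip route del blackhole ${TARGET}/32"   # lift the null route later
echo "$ADD_CMD"
echo "$DEL_CMD"
```

    As the updates above illustrate, lifting the route too early simply invites the attack back, so the null route is usually kept in place until the attack has clearly abated.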

  • Date - 14/07/2015 01:49 - 13/07/2015 20:47
  • Last Updated - 13/07/2015 20:47
Packet Loss (NZ) (Resolved)
  • Priority - Medium
  • Affecting System - Auckland Data Centre
  • Mon, 8 Jun 2015 10:21 AM NZST: We have been advised that there is an ongoing DDoS attack affecting international transit levels. Additional capacity has been enabled to restore service while engineers work on mitigating the attack. Connectivity appears to be normal again at this stage.

    We are currently investigating an issue where packet loss is occurring on some international routes to Auckland servers. National traffic is not affected as far as we know.

  • Date - 08/06/2015 10:26 - 08/06/2015 10:40
  • Last Updated - 21/06/2015 02:03
Auckland core network planned maintenance (Resolved)
  • Priority - Low
  • Affecting System - Auckland Data Centre
  • In order to mitigate an issue which has been identified in edge switching equipment, the Auckland datacenter network team will be migrating ports and Layer 3 interfaces associated with some services during the maintenance window below. This should cause little to no impact to end customers.

    Maintenance Window: 2015-06-03 02:00-05:00 NZST

  • Date - 03/06/2015 02:00 - 08/06/2015 10:26
  • Last Updated - 02/06/2015 10:27
Auckland core network planned maintenance (Resolved)
  • Priority - Low
  • Affecting System - NZ Data Centre
  • We have been advised of work by our network upstream to update security capabilities to their network core:

    "As part of our continuous improvement programme, we have a requirement to deploy a service pack upgrade to our aggregation network devices as instructed by our vendor. This requires a reload and as such there will be a small outage to your services* while the nodes are reloaded."

    The following schedule has been advised for each task.

    Change 1: 3 May 2015 23:00 NZST - 3 May 2015 05:00 NZST
    Change 2: 3 May 2015 23:00 NZST - 4 May 2015 05:00 NZST

  • Date - 03/05/2015 12:10 - 02/06/2015 10:25
  • Last Updated - 27/04/2015 12:41
Xen Host Updates (Resolved)
  • Priority - Medium
  • Affecting System - All NZ & UK VPS's
  • Issue: There are some security issues affecting Xen hosts (and thus the VMs that run on those hosts) per http://xenbits.xen.org/xsa/.

    The details are embargoed, meaning they cannot be made public. We are on a 'predisclosure' list, so we are privy to the details of those issues (and the fixes). However, we are not permitted to disclose any details to our customers while the issues remain embargoed.

    Impact: This affects VPS's hosted with us. It excludes regular dedicated servers not running the Xen hypervisor (bare metal).

    Course of action: We will be applying those fixes. To do that we will stop all VPS's on a Xen host, install the updated packages and restart the host server. VPS's will come up after the reboot. Typically it will take 20-60 minutes of downtime to complete this process.

    Timing: Because of the nature of the problem we will be doing restarts around the clock on affected hosts. We will attempt to do those, where possible, outside of business hours at the host server data center location.

    We will be updating this notice with more information as our planning and the rollout of the fixes progresses.


    Update @ Mar 4 13:20 UTC: We are scheduling host servers for the upgrade. We will email you when your VPS is added to the schedule. Please bear with us as we attempt to get these hosts patched as quickly as possible.

  • Date - 09/03/2015 12:43 - 17/03/2015 09:31
  • Last Updated - 09/03/2015 21:33
MySQL Restart (Resolved)
  • Priority - Medium
  • Affecting Server - PLESK WP
  • 23/2/2015 14:35: The server will need to be rebooted in order to mount and dismount a backup image. We anticipate a small downtime window of 5 minutes during this time.

  • Date - 23/02/2015 14:37 - 09/03/2015 12:43
  • Last Updated - 23/02/2015 14:42
Critical Security Patches (Resolved)
  • Priority - Critical
  • Affecting System - All NZ and UK servers
  • We will be systematically updating and rebooting all NZ and UK VPS and dedicated servers over the next few hours to implement critical security updates. This relates to the CVE-2015-0235 glibc gethostbyname 'GHOST' vulnerability. Anticipated downtime is 5-10 min, depending on the time to reboot. Please notify us at support@createhosting.co.nz if you would like to schedule your VPS reboot for after hours.

    Update 04:00 30/01/2015: Maintenance completed.

  • Date - 29/01/2015 18:50 - 30/01/2015 04:00
  • Last Updated - 31/01/2015 13:13
Packet Loss UK Servers (London) (Resolved)
  • Priority - Medium
  • Affecting System - London DC
  • We're noticing some packet loss in our London data center. Currently affecting HOST2 and HOST3. Investigating the issue. 

    Update: 20:43 NZT / 07:43 UTC - There was a DoS (denial of service) issue with the network that has now been resolved.

  • Date - 16/01/2015 18:47 - 16/01/2015 20:43
  • Last Updated - 17/01/2015 00:49
Packet Loss NZ Servers (Resolved)
  • Priority - High
  • Affecting System - Auckland Data Centre
  • We're experiencing some high packet loss on multiple servers in our NZ data center. We're investigating now and will update this notice with further information as we receive it.
    Update: 0855UTC - Network is back to normal. Awaiting root cause analysis from DC.

  • Date - 09/01/2015 22:01 - 10/01/2015 09:31
  • Last Updated - 10/01/2015 09:33
SSD Issue (VPS HOST SERVER - HOST3) (Resolved)
  • Priority - Critical
  • Affecting System - vm9,vm11,vm13,vm13-w1,vm13-w2,vm13-d1,vm16,vm17,vm18
  • Reported: VPS's on the host3.createhosting.co.nz host server are reporting high I/O. Investigating.
    Update: 10:00 AM NZST - A failing SSD has been hot swapped out and the new drive is now syncing with the RAID array.
    Update: 11:30 AM NZST - We are still getting reports of fluctuating I/O on VPS's for this host.
    Update: 13:00 NZST - We are looking at options for changing all SSD's, replacing the existing enterprise Samsung SSD's, which appear to be causing the issues.
    Update: 13:30 NZST - We will be replacing the existing enterprise Samsung SSD's with enterprise Intel SSD's. A maintenance window and timeframe will be advised as soon as we have further information.
    Update: 15:00 NZST - The first new Intel SSD is installed and the RAID array is syncing. The I/O issues should now be resolved, but performance will be slightly degraded until the sync has completed. We expect that to take 24-48 hours.
    Update: 24/12/2014 NZST - The last SSD is now installed and fully synced.

  • Date - 23/12/2014 07:20 - 25/12/2014 00:00
  • Last Updated - 30/12/2014 11:31
Evolomail.com (Resolved)
  • Priority - High
  • Affecting System - Evolo Outgoing Mail Server
  • Reported: Issues logging in to evolomail.com - redirect loop. Mail sending OK.
    Update: Issue is now resolved.

  • Date - 23/12/2014 11:44 - 23/12/2014 11:45
  • Last Updated - 23/12/2014 12:57
Network Maintenance (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • Update Sat, 27 Sep 2014 18:05 PM UTC: Issue now resolved.
    Update Sat, 27 Sep 2014 17:55 PM UTC: One IP range (103.6.213.0/24) seems to be having issues still.  We have contacted our datacenter to get that resolved.
    Update Sat, 27 Sep 2014 17:46 PM UTC: All servers are back up, if you are having any issues please let us know.
    Update Sat, 27 Sep 2014 17:25 PM UTC: We are seeing a large number of hosts down after this network maintenance, we are investigating this issue.

    The Auckland data center is making some changes to their core network setup at the Albany data center this Saturday the 27th September from 10PM NZST till 5AM NZST 28th September.

    They have been planning and implementing these changes for the past few weeks. The changes will require no more than a few 2-15 minute outages between 10PM and 5AM.

  • Date - 27/09/2014 22:00
  • Last Updated - 03/10/2014 11:23
SSD replacement (Resolved)
  • Priority - Low
  • host-uk.createhosting.com has a failing SSD drive, which will need to be replaced. This particular server does not feature hot swap, so we will require some downtime for this maintenance.

    This affects the following VPS's

    V1-V6

    This has been scheduled for Sat, 27 Sep 2014 01:00 AM UTC. Staff in London data center will take the server offline for an estimated 20-40 minutes. We will update this notice with any news we have.

  • Date - 27/09/2014 13:00 - 03/10/2014 11:23
  • Last Updated - 24/09/2014 11:32
At Risk Declaration - London Datacenter (Resolved)
  • Priority - Low
  • Our London datacenter will be performing scheduled service on the UPS units that back the A power feed to the datacenter. This is a standard service that is carried out to verify that the UPS is in good operating shape.

    "Works are not expected to affect service; however whilst one of the units is under maintenance there will be reduced power redundancy and an At Risk declaration. Customers will remain on UPS protected supplies with generator backup throughout the maintenance periods protecting against interruption to the mains supply.

    Power to the A feed is not expected to be interrupted at any time as the building infrastructure allows us to transparently switch the input source for the A feed to the B side, throughout the A side maintenance."

    This has been scheduled for Tuesday 23rd September, commencing at 2200Hrs and ending by 0500Hrs on Wednesday 24th September 2014. (GMT)

  • Date - 24/09/2014 10:00 - 03/10/2014 11:23
  • Last Updated - 24/09/2014 11:32
Network issue (Resolved)
  • Priority - Medium
  • Affecting Other - NZ Data center
  • We are experiencing a networking issue at the NZ Data center. We are investigating and will update this notice when we have more information.

    Update: Mon, 26 May 2014 12:23 PM UTC: We are currently seeing severe packet loss for international connections to NZ. We are working with our datacenter to diagnose the source of that loss.

    Update: Mon, 26 May 2014 12:42 PM UTC: We identified a server that was flooding our international connectivity. That server has been removed from the network pending further investigation. Normal network service should be restored now.

  • Date - 26/05/2014 23:57 - 24/09/2014 10:55
  • Last Updated - 27/05/2014 00:53
Packet Loss / Slow Connection (Resolved)
  • Priority - Medium
  • Affecting Other - AUCKLAND NZ DC
  • We are currently seeing some network congestion on our upstream connection. This is causing some users to experience slower than normal connections, and in some cases a bit of packet loss.

    We are working with our upstream provider to resolve this asap, and restore normal service.

    Update: Tue, 13 May 2014 05:17 AM UTC: Our upstream has confirmed there is some congestion, particularly for national (NZ) users. They have added extra capacity to reduce the impact of that traffic, which appears to have resolved the issue with our current traffic levels.

  • Date - 13/05/2014 14:46
  • Last Updated - 14/05/2014 10:43
Heartbleed Bug CVE-2014-0160 (Resolved)
  • Priority - Critical
  • Affecting Other - Managed VPS (Magento Optimised) + Managed VPS (w/ Plesk Option)
  • The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. CVE-2014-0160 (Common Vulnerabilities and Exposures) is the official reference to this bug. It can result in private keys (e.g. SSL keys) being exposed.

    Further reading

    http://heartbleed.com/
    http://arstechnica.com/security/2014/04/critical-crypto-bug-in-openssl-opens-two-thirds-of-the-web-to-eavesdropping/
    https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0160

    Distro specific information

    Debian: https://security-tracker.debian.org/tracker/CVE-2014-0160 (versions prior to Debian 7 Wheezy are unaffected)
    Centos: http://lists.centos.org/pipermail/centos-announce/2014-April/020248.html (all versions prior to 6.5 are unaffected)
    Ubuntu: http://askubuntu.com/questions/444702/how-to-patch-cve-2014-0160-in-openssl (all versions prior to 12.04 are unaffected)

    What versions of the OpenSSL are affected?

    Status of different versions:
    • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
    • OpenSSL 1.0.1g is NOT vulnerable (this is the bug fix released 7th of April 2014)
    • OpenSSL 1.0.0 branch is NOT vulnerable
    • OpenSSL 0.9.8 branch is NOT vulnerable

    The bug was introduced to OpenSSL in December 2011 and has been out in the wild since the OpenSSL 1.0.1 release on 14th of March 2012. OpenSSL 1.0.1g, released on 7th of April 2014, fixes the bug. This means the bug has been in the wild for over 2 years but is only now becoming widely known, and all VPS servers need to be tested.

    Version checking

    Online checking site here: http://filippo.io/Heartbleed 

    To see which openssl version you are using, run the command:
    openssl version

    Check whether the version is an old (unaffected) one, or has an April 2014 build date (even if the version string does not exactly match a fully fixed release, vendors sometimes backport the specific patch).

    For a completely accurate test use the command line tool here
    https://github.com/FiloSottile/Heartbleed
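
    The classification above can also be scripted. A minimal sketch (the helper function name is ours, not a standard tool) that sorts an "openssl version" string into the vulnerable/fixed/unaffected ranges listed earlier:

```shell
#!/bin/sh
# Classify an "openssl version" string per the ranges above:
# 1.0.1 through 1.0.1f vulnerable; 1.0.1g fixed; 1.0.0 and 0.9.8 unaffected.
# Note: distro builds may backport the fix, so also check the build date.
check_heartbleed_version() {
  case "$1" in
    "OpenSSL 1.0.1g"*)                        echo "fixed" ;;
    "OpenSSL 1.0.1"[a-f]*|"OpenSSL 1.0.1 "*)  echo "vulnerable" ;;
    *)                                        echo "not affected" ;;
  esac
}

check_heartbleed_version "OpenSSL 1.0.1f 6 Jan 2014"   # prints "vulnerable"
check_heartbleed_version "OpenSSL 1.0.1g 7 Apr 2014"   # prints "fixed"
```

    A string match like this is only a first pass; the online and command line checkers above probe the actual TLS heartbeat behaviour and are more reliable.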

    Scheduled Maintenance

    OpenSSL will be patched on all affected servers. All services linked against the vulnerable (ironically, the newer) version of OpenSSL will need to be restarted to apply the newly patched OpenSSL version. These services include Apache, PHP-FPM and mail services (imap4-ssl, pop3-ssl, smtp, smpt2). There will be a very brief service interruption as these services are restarted. We will perform a clean shutdown and VPS restart to ensure that all services are restarted.

    Update: 09/04/2014 13:00 (NZST)

    All Managed VPS servers have been checked and patched where affected. For all SSL certificates purchased through Create Hosting Ltd, we will automatically re-key those SSL's and reinstall them on the VPS. All you will need to do is approve the SSL issuance request (once you receive the email from GeoTrust). Due to the large number of SSL's to re-issue, please be advised that it may be a few days before your certificate is re-keyed and installed.

  • Date - 09/04/2014 09:00 - 13/05/2014 16:55
  • Last Updated - 10/04/2014 11:24
HOST3 Unreachable (Resolved)
  • Priority - High
  • Affecting System - Varnish / Cluster
  • We are currently experiencing an issue with our NZ based HOST3 server. This is affecting all Varnish based servers and clusters:

    vm11.createhosting.co.nz
    vm13.createhosting.co.nz
    vm13-d1.createhosting.co.nz
    vm13-w1.createhosting.co.nz
    vm13-w2.createhosting.co.nz

    Technicians are working on this now and hope to have it resolved ASAP.

    Update: 18:15NZT We have identified a bug in XEN (http://xen.crc.id.au/bugs/view.php?id=25). We will perform a software upgrade on the host server to address this and restart each VPS. We anticipate this to take 30 minutes.

    Update: 18:52NZT All services back online

    Root cause analysis:

    It looks like the host server got hit by a bug that we have recently discovered in the host OS. This affects secondary storage on VPS's, and is described in detail on http://xen.crc.id.au/bugs/view.php?id=25

    Adding a new VPS to the host triggered this rare event due to the host server being under extra load - it is consistent with the issue. Engineers had already tested a fix of our hosting stack that resolves this. To install the fix meant stopping all applicable VPS's and rebooting the host.

    For stability purposes we made the call to go ahead and do that whilst the filesystem was already locked up and VPS's offline. The VPS's were cleanly shutdown and technicians started the upgrade process. There was a fair amount of work involved for that, so it did take a while to complete. The host is of course back online and all VPS's are operating as normal with the fix in place.

  • Date - 28/03/2014 17:46 - 28/03/2014 18:52
  • Last Updated - 31/03/2014 09:02
Outgoing Mail Queued (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • Technicians are currently investigating an outgoing mail issue that is preventing queued mail from being sent. Once we have cleared the issue, the mail will be automatically sent. We hope to have the issue sorted as soon as possible.

    Update: This issue has now been resolved and queued mail is now being delivered. This took a while to fix, as all mailboxes had to be disabled in order to rebuild all configuration. We will investigate further to mitigate future occurrences.

  • Date - 21/03/2014 12:34
  • Last Updated - 21/03/2014 14:11
Packet loss (Resolved)
  • Priority - High
  • Affecting System - Auckland Data Center (NZ)

  • Update: Thu, 9 Jan 2014 04:11 AM UTC: Resolved.

    Update: Thu, 9 Jan 2014 04:01 AM UTC: Seeing this packet loss recur. Investigating.

    Update: Thu, 9 Jan 2014 01:12 AM UTC: We have been advised that the initial issue was caused by a BGP routing problem with one of the datacenter's upstreams. The fix was to drop that upstream until the issue was resolved. The more recent packet loss was caused by trying to reinstate that upstream to service; however, the problem continued and that peer was dropped again.

    Update: Thu, 9 Jan 2014 00:29 AM UTC: Seeing further packet loss, particularly affecting international traffic. Investigating.

    Update: Wed, 8 Jan 2014 22:00 UTC: Waiting on an incident report from the datacenter.

    The Auckland datacenter is experiencing some network issues (e.g. high packet loss, intermittent connections, etc). We are going to investigate and ask the datacenter staff to provide a detailed report.

  • Date - 08/01/2014 19:00 - 08/01/2014 19:00
  • Last Updated - 21/01/2014 12:16
Planned network maintenance (Resolved)
  • Priority - Low
  • Affecting System - Auckland Data Center (NZ)
  • On 19 Nov 2013 at 8-10pm NZDT we are planning an update to our core network which will remove an intermediate routing device and improve our main feed redundancy. That will involve a staggered move of our redundant core uplinks to different routers at the datacenter.

    The impact of the change should be minimal, you may see a couple of brief (<1min) periods of packet loss as routes fail across our redundant feeds and stabilize. The longer window is to allow for time to test after each uplink is moved.

    Benefits will include transparent upgrade capabilities and adding the final planned layer of physical network redundancy within the datacenter. We also anticipate seeing slightly less connection latency after the move.

    If you have any questions please get in touch with our support team. Note that due to the requirements of this work we are not able to reschedule.

    UPDATE: Tue, 19 Nov 2013 21:03 PM UTC: This work was successfully completed.

  • Date - 19/11/2013 20:00 - 28/03/2014 18:00
  • Last Updated - 20/11/2013 10:08
Connectivity Issue (Resolved)
  • Priority - Critical
  • Affecting System - London Data Center
  • At 1335 GMT, London experienced a routing issue that caused an outage in our provider's own aggregation core router, which is hosted with the datacenter. Not all customers were affected by this outage, only the ones routed via that core router.

    Datacenter technicians suspected that the issue was with the core router, so as a preventative measure it was rebooted. However, nothing appeared to be wrong with that router, and it is working fine.

    It was later discovered by datacenter engineers that the issue was a routing glitch caused by the internal routing at the datacenter and upstream providers, and was not related to the core router.

    Update 1420GMT: The issue has now been solved. We apologise for the inconvenience caused and we are working with the datacenter to mitigate future issues like this.

  • Date - 14/11/2013 02:35 - 14/11/2013 03:20
  • Last Updated - 14/11/2013 10:44
Emergency Maintenance (Resolved)
  • Priority - Low
  • Affecting System - London Data Center(s)
  • Network techs will be conducting emergency maintenance on UK transit networks that will affect most, if not all, London based servers.

    Impact is expected to be minimal, but you may notice a few brief periods of stalling (packet loss) during this window.

    The work will occur between 05:00 - 07:30 GMT on 13/11/2013.

    Update @1345 GMT: The connection to London seems to be fluctuating. We are investigating the issue now.
    Update: The connection issue is unrelated to the scheduled emergency maintenance; it has been moved to a separate issue.

  • Date - 13/11/2013 18:00 - 13/11/2013 19:00
  • Last Updated - 14/11/2013 10:35
Network Maintenance (Resolved)
  • Priority - High
  • Affecting System - VPS Host Server (host2.createhosting.co.nz)
  • As part of a network upgrade and expansion, the VPS host server for vm1.createhosting.co.nz - vm13.createhosting.co.nz will be reconfigured, shut down and restarted. All Magento Optimised VPS's on this server will be systematically shut down and restarted.

    We have allocated a 1-hour window for this work to cover the worst-case scenario; however, we only anticipate it taking around 10 minutes.

    Update: this has now been completed.

  • Date - 26/10/2013 14:00 - 26/10/2013 14:15
  • Last Updated - 26/10/2013 15:00
Packet loss over international connections (Resolved)
  • Priority - Medium
  • Affecting System - Auckland Data Center
  • We are noticing issues at the Auckland datacenter which are resulting in severe packet loss. We are looking into this now.

    Update: Fri, 25 Oct 2013 00:07 AM UTC: This appears to be resolved now, we are still researching the cause.

    Update: Fri, 25 Oct 2013 00:08 AM UTC: A DDoS targeted at a server on our network was found, and the affected server null routed to restore normal service.

  • Date - 25/10/2013 12:00 - 25/10/2013 13:05
  • Last Updated - 25/10/2013 13:41
Network Upgrade (Resolved)
  • Priority - Low
  • Affecting System - Auckland Data Center
  • As part of our ongoing work to make our network more reliable, several network ranges need to be migrated from an upstream router to a redundant pair of routers on our provider's own core.

    Benefits of this will include a more reliable response to device failure, and better network usage monitoring for those ranges.

    Visible impact of the configuration change is expected to be minimal, with only a few seconds of outage while arp caches update.

    We have scheduled a 1 hour window for this work, between 8pm-9pm NZ on Tuesday 15th October (7-8am UTC).

    Update @ 16 Oct 01:51 UTC: This work cannot be performed this evening. We will aim to do this on Tuesday evening next week at the same time.

    Update: Tue, 22 Oct 2013 06:39 AM UTC: Just confirming we will be starting on this in about 30 minutes.

    Update @ 07:24 UTC: Changes have been implemented. Servers are all pinging. Please let us know if you are seeing any 'odd' networking issues in the Auckland DC.

  • Date - 15/10/2013 20:00 - 22/10/2013 19:27
  • Last Updated - 25/10/2013 13:40
Power Outage (Resolved)
  • Priority - Medium
  • We are seeing a power issue at the London data centre. This has caused some servers to go offline. The core network is currently coming back up and we'll update this notice with further info as the servers come online.

    UPDATE @14:33 PM UTC: Servers are now back online.

  • Date - 19/10/2013 03:16 - 21/10/2013 15:01
  • Last Updated - 19/10/2013 14:39
Connectivity Issue (Resolved)
  • Priority - Critical
  • Affecting System - London Data Center
  • We are seeing some network connectivity problems in the London datacenter, we are investigating.

    Update: Fri, 11 Oct 2013 20:07 PM UTC: The datacenter has confirmed an issue with an upstream network. They are working on resolving that ASAP.

    Update: Fri, 11 Oct 2013 20:57 PM UTC: It appears a switch has failed upstream of us, affecting the entire network. Engineers are working to replace that now.

    Update: Fri, 11 Oct 2013 21:08 UTC: A switch is being replaced. Expecting progress in 20 minutes...

    Update: Fri, 11 Oct 2013 21:29 UTC: Connectivity has been restored for now on our backup feed.

    Update: Fri, 11 Oct 2013 21:38 PM UTC: While connectivity has been restored to most of our servers, there are several still affected by this issue. The failed switch is still being replaced and once that is back we should have those last three servers back online.

    Update: Fri, 11 Oct 2013 22:54 PM UTC: The replacement switch has been racked, however the network is still not working. Network technicians are trying to isolate the cause of that now. In the meantime we are requesting an interim solution to restore connectivity for those last few servers.

    Update: Fri, 11 Oct 2013 23:09 PM UTC: The last three servers are now responding.

    Update: Sat, 12 Oct 2013 00:47 AM UTC: We are seeing some short periods of connectivity loss to those three servers. This is likely due to work restoring the network to its previous state. Normal service should be fully restored soon.

    Update: Sat, 12 Oct 2013 03:30 AM UTC: The network appears to be stable. We have received the following initial root cause analysis report from the datacenter. We are expecting a full report from the datacenter on Monday.


    "The underlying root cause of tonight's issues has been identified as a fibre break. This caused excessive flapping on the E1 transit ring which resulted in several of the ring switches rebooting. Unfortunately one of the switches did not recover - this then led to severe network instability which proved difficult to pinpoint."

  • Date - 12/10/2013 08:03 - 12/10/2013 15:30
  • Last Updated - 12/10/2013 22:06
Connectivity Issue (Resolved)
  • Priority - High
  • Affecting System - London Data Center(s)
  • We are currently experiencing connectivity issues in London. We are investigating and will update this notice with any news.

    Update from data center @0852UTC: We are aware of a network related issue and the Network Team are currently investigating this.

    Update @0911 UTC: The network seems to be back to normal in the Maidenhead DC. Host1 is currently unavailable, affecting the v1, v2 and v3 VPS's. Still waiting for an update from the network team.

    Update @0959UTC: Edge servers on this network are now accessible. Awaiting root cause analysis report from datacenter.

    Root Cause Analysis Report

    On the morning of Monday, 7 October, we became aware of routing instability that affected some servers. This issue resulted in the loss of connectivity to some dedicated servers.

    The issue had two symptoms: loss of traffic to the primary IP of the server and, in some edge cases, further loss of traffic to any secondary assigned IPs. After extensive engineering and diagnostic work with our router vendor, we have identified this issue as a problem in the firmware that one of our routers was running.

    We have deployed the fix and, based on further testing, we can say this issue has been resolved. Once again, we apologise for the inconvenience caused by this issue and thank you for your patience during the problem.

  • Date - 07/10/2013 22:10
  • Last Updated - 12/10/2013 22:04
Emergency Maintenance (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • We have scheduled emergency maintenance of westmere.createhosting.co.nz to begin at 23:00 NZST 24/09/2013. We anticipate this will take 30-60 min. This is phase one of a major update. This phase will allow us to clone the existing architecture and perform the update in a safe environment, checking for any issues so that they can be mitigated in the actual/live update. During this window, the VPS will be paused and some services will not be available.

    Update: This will be repeated at 23:00 NZST 26/09/2013.
    Update: This will be repeated at 23:00 NZST 28/09/2013.
    Update: This will be repeated at 23:00 NZST 02/10/2013. Immediately after, our system will undergo maintenance for a scheduled major upgrade. Please allow 60 min - 120 min. The VPS and services will be paused or disabled at certain stages through the upgrade process.
    Update: The upgrade process failed and the server had to be rolled back to the backup performed at approx 23:00 02/10/2013. Unfortunately, there was no alternative. We will investigate the cause, and we plan to repeat the procedure (or an alternative) in the near future.

  • Date - 24/09/2013 23:00
  • Last Updated - 10/10/2013 21:30
LVM Partition Resize Causing Outage (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • A scheduled LVM disk partition resize is taking longer than expected. While this usually completes without disruption to services, in this instance the server has become unresponsive. We believe this is likely due to high I/O. Technicians are investigating.

    Update: The resize scripts triggered a full disk backup as a precautionary measure prior to the resize. This has caused high I/O. We will need to let the backup and resize complete. Estimated time remaining: 10 min.

    Update: Backup completed, disk resized. Load has dropped to acceptable levels. We will review the resize scripts to mitigate this for future occurrences of this systems administration task.

  • Date - 29/08/2013 23:11 - 29/08/2013 23:35
  • Last Updated - 04/09/2013 13:38
Emergency Maintenance (Resolved)
  • Priority - High
  • We have scheduled emergency maintenance of host2.westmere.createhosting.co.nz to begin at 07:00 NZST 19/07/2013. We anticipate this will take 1 hour. This affects the following VPSs:

    • vm1.createhosting.co.nz
    • vm2.createhosting.co.nz
    • vm3.createhosting.co.nz
    • vm4.createhosting.co.nz
    • vm5.createhosting.co.nz
    • vm6.createhosting.co.nz
    • vm7.createhosting.co.nz
    Update: This was successfully completed with an upgrade of CentOS and the Xen hypervisor. Further tests were performed on the blade server's IPMI card to ensure remote power cycling can be performed in the event of an emergency.

  • Date - 19/07/2013 07:00 - 14/08/2013 21:54
  • Last Updated - 19/07/2013 18:21
Outage on Host Server (Resolved)
  • Priority - High
  • We have a reported outage to the VPS Host server for vm1.createhosting.co.nz - vm7.createhosting.co.nz. This is currently being investigated.

  • Date - 15/07/2013 01:57
  • Last Updated - 19/07/2013 18:17
Network Upgrade (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • Scheduled Wed 29th May 2013 02:00 - 03:00 NZST

    A network equipment upgrade is scheduled for the Auckland data center that we use. Extra space is being added for more VPS hosts and more dedicated servers. New switches have been purchased, installed and configured. The new setup improves redundancy and enables us to support more features (e.g. excluding intra-data center traffic from your data transfer allowance) and adds capacity (when you are out of network ports you cannot add much gear!).

    The new gear is already in place, but we will need to physically move the server cables over to the new switches. This requires an unplug at the old switch and a replug at the new switch. No changes are required to the servers themselves.

    A one hour maintenance window has been set aside, between 2am - 3am NZ time on Wed 29th May. We anticipate the actual impact will only be about 2-5 minutes of network disconnection while cables are moved and network routes update.

  • Date - 29/05/2013 02:00 - 07/06/2013 16:56
  • Last Updated - 27/05/2013 17:59
Long Running Snapshot (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • One of our servers is experiencing high load and consequently slower response times. We will investigate the cause and update this notice.

    Update: A number of high-I/O cron jobs (scheduled tasks) were set to run at the same time as the weekly backups (LVM snapshot). Combined with a distributed WordPress brute-force attack, this had a compounding effect on load. We believe this has been mitigated at all levels and do not expect it to recur. All services were restored at approx 17:35.
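
    As an illustration of this kind of mitigation (the times and script paths below are hypothetical examples, not our actual crontab), staggering high-I/O scheduled tasks away from the weekly snapshot window avoids compounding load:

```
# Weekly LVM snapshot backup (Sunday 17:00)
0 17 * * 0   /usr/local/sbin/backup-snapshot.sh
# High-I/O maintenance jobs moved two hours later, clear of the backup window
0 19 * * 0   /usr/local/sbin/reindex.sh
30 19 * * 0  /usr/local/sbin/logrotate-heavy.sh
```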

  • Date - 26/05/2013 17:00 - 26/05/2013 17:32
  • Last Updated - 27/05/2013 17:13
Unplanned Outage (Resolved)
  • Priority - Low
  • Affecting System - Auckland Data Center
  • Our monitors picked up a 5 minute outage on connectivity to our NZ servers. All services are working normally again now and we are investigating; awaiting a report from DC staff.

    Update: This appears to be a routing issue and is scheduled to be addressed on 29/05/2013.

  • Date - 22/05/2013 17:32
  • Last Updated - 27/05/2013 17:02
Security Update (Resolved)
  • Priority - Critical
  • Affecting System - All Servers (UK and NZ)
  • We will be applying an URGENT security update to all VPS and dedicated servers, which will require the servers to be rebooted. We have only recently been made aware of the security advisory and have decided to implement it immediately.

    Downtime will be around 5 minutes and the work will start around 11AM NZST 17/05/2013.

  • Date - 17/05/2013 09:00
  • Last Updated - 22/05/2013 17:54
London Datacenter 1 (Resolved)
  • Priority - High
  • Affecting System - Host Servers (HOST2 and HOST3)
  • We are seeing some connectivity problems on London servers right now. We are investigating...

    Update: Wed, 15 May 2013 05:09 AM UTC: Service has been restored. We are still trying to establish what occurred.

    Update: Wed, 15 May 2013 05:19 AM UTC: A network technician accidentally null-routed some key IPs, which caused parts of our network to become unresponsive. We have asked them to keep records of such changes so this does not recur.

  • Date - 15/05/2013 16:40 - 17/05/2013 11:00
  • Last Updated - 16/05/2013 09:27
XEN Hypervisor Upgrade (Resolved)
  • Priority - Critical
  • XEN Hypervisor Upgrade
    Scheduled 5AM CET (3PM NZT) 26/04/2013

    Fri, 26 Apr 2013 09:29 AM UTC: The server is back up with a similar, slightly updated hosting stack, as well as general package updates. All VPSs should be responding again now, with the original network configuration.

    Fri, 26 Apr 2013 09:16 AM UTC: We have managed to track down an old copy of the original host packages, and will test those now.

    Fri, 26 Apr 2013 07:46 AM UTC: Testing some different kernels, unfortunately we have not been able to locate the exact kernel used before, so are having to find a workaround.

    Fri, 26 Apr 2013 06:00 AM UTC: Experiencing problems connecting to the console, and slow turnaround on remote hands. Also seeing this server take a long time to start.

    host.westmere-uk.createhosting.com was scheduled for an upgrade; however, after the upgrade a number of issues were encountered, including the new kernel not starting normally. We have opened this notice to track and document the issue.

  • Date - 26/04/2013 15:00 - 29/04/2013 13:58
  • Last Updated - 26/04/2013 22:18
VPS Host Server Outage (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • Host server "host2.westmere-nz.createhosting.co.nz" is not responding normally. Technicians are investigating and may restart it. This will affect all vm*.createhosting.co.nz VPS servers.

  • Date - 11/04/2013 19:25
  • Last Updated - 26/04/2013 19:54
Packet Loss (Resolved)
  • Priority - Low
  • Affecting Other - Auckland Datacenter / NZ/AU
    Update @ 01:38 UTC / 2:38 PM NZST: We have not seen any more trouble with this and are still waiting for information on the cause of the packet loss. We believe it was due to a DDoS/network flood, but this has not been confirmed.

    Update @ 19:56 UTC / 8:56 NZST: Network connectivity seems to be stable now; we are waiting for confirmation from the network team on what caused the issue.

    We are seeing some packet loss at our Auckland based facility, we are investigating now.

  • Date - 11/03/2013 08:35
  • Last Updated - 11/03/2013 14:42
Emergency FSCK / MySQL Check (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • Sun 24th Feb 16:36 NZST


    Server westmere.createhosting.co.nz has encountered a major filesystem failure and has to be taken offline in order for technicians to perform an emergency fsck and restore.

    DNS services are still operating.

    Unfortunately, we do not have an ETA at this time. We will update this thread accordingly once we have more information. We apologize for the inconvenience and as always, we thank you for your patience during this time.

    Update: 23:50 NZST. All services are back online. Performance will be degraded while all databases are checked for errors. This will take a few hours to complete. A full update / root cause analysis will be provided once we have all information back from technicians.

    Update: 08:00 NZST. The main VPS image is running off secondary hard drives. Performance is temporarily degraded. We will need to stop the server at some point and move that image to an LVM volume with the optimal hardware RAID configuration, utilising the higher-performance SAS drives.

    Since the image is quite large, that will take some time. We will schedule the most suitable time and update this notice accordingly.

    Update: 11:40 NZST. We will be taking the server offline at 12:00 NZST in order to restore the filesystem to the RAID configuration, after which performance will be back to normal. Downtime is expected to be 30 min - 1 hour.

    Update: 12:46 NZST. The restore operation to RAID is complete and all services are back online (36 min).

  • Date - 24/02/2013 16:36 - 24/02/2013 23:50
  • Last Updated - 27/02/2013 11:20
OS Security Updates (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • Sun 24th Feb 2013 16:30 NZST

    In response to a zero day security advisory, as a precautionary measure we will be performing a number of unscheduled security updates to key servers in the Auckland Data Center. No downtime is expected.


  • Date - 24/02/2013 16:30 - 25/02/2013 02:10
  • Last Updated - 25/02/2013 02:07
Network Maintenance (Resolved)
  • Priority - Low
  • Affecting System - Auckland Datacenter / NZ/AU
  • Feb 27 2013: 06:00AM - 07:00AM UTC (7PM - 8PM NZST)

    Work is being conducted on UPS systems by a third party connectivity provider within the Sky Tower at Auckland.

    This work will result in a link outage of up to 30 minutes during the outage window due to the requirement for upstream switching equipment to be temporarily shut down. This will impact our primary connectivity to the Auckland Peering Exchange.

    We have in place secondary and tertiary connectivity and are not expecting this work to have a significant impact. However, during this time NZ local traffic performance may be temporarily degraded during the transition to non-primary connections.


  • Date - 27/02/2013 19:00 - 11/04/2013 19:34
  • Last Updated - 23/02/2013 14:12
Routing Issue (Resolved)
  • Priority - Critical
  • Affecting System - Auckland Datacenter / NZ/AU
  • There is currently a routing issue at the Auckland datacenter. Packets stop at the Vocus upstream provider. Unfortunately, Vocus holds the fibre coming into NZ, so failover to another provider is not possible in this instance.


    Update Sun Dec 9 03:00:26 UTC 2012: There have been simultaneous hardware failures at Vocus; engineers are on site. The initial ETA was 15 minutes, but they have been set back. We are seeing that international routes are mostly working now, but there are some issues with propagation.

  • Date - 09/12/2012 14:44 - 09/12/2012 15:06
  • Last Updated - 10/01/2013 12:07
Network Outage (Resolved)
  • Priority - High
  • Affecting System - Auckland Datacenter
  • It looks like upstream connectivity failed a short time ago. Normal services appear to be restored. We are still investigating the root cause.

    Update: The network has been failed over to a secondary peer to restore service. Apparently a device at the Sky Tower (Auckland, NZ) stopped forwarding traffic.


    Update: We received the following detailed incident report:

    At approximately 3.50pm NZT on the 27th November 2012 we experienced a loss of our primary International and Domestic Internet IP Transit.

    The loss of this service was a result of a power issue to the rack we have our switching equipment collocated with in the Sky Tower.

    The power loss appears to have been very brief; however, because it caused the switch to restart, it took approximately 10-15 minutes for the switching device to come back online.
    All services were fully restored by 4.04pm NZT.

    During this outage our secondary IP Transit service links were offline for maintenance and our network was unable to automatically re-route services.

    Our engineers started to bring the secondary links online, however the primary services came back online prior to the backup IP transit links coming back online.

    We apologise for the downtime experienced and we are in discussions with our upstream providers to investigate the cause of the power fluctuation.

    Our engineers are also completing the maintenance of the secondary IP Transit links and we estimate to be bringing these back online by 5pm today.

  • Date - 27/11/2012 16:13 - 28/11/2012 00:00
  • Last Updated - 06/12/2012 10:59
Network Maintenance (Resolved)
  • Priority - High
  • Affecting System - Auckland Datacenter
  • Our provider in Auckland will be performing some maintenance which will cause some brief network outages. The nature of the work is VLAN maintenance required on core routers. 


    This work is scheduled for 12th December at 10pm NZST. See http://timeanddate.com/worldclock/fixedtime.html?iso=20121212T0900 

    We expect a few brief network outages over a 30 minute period.

  • Date - 12/12/2012 22:00 - 10/01/2013 12:06
  • Last Updated - 06/12/2012 10:58
Core Network Maintenance (Resolved)
  • Priority - High
  • Affecting System - Auckland Datacenter
  • We have been informed by our datacenter of an emergency maintenance change affecting the core network for the servers. The scheduled window is 27th November 2012 10:00pm to 28th November 2012 2:00am NZT.

    The expected downtime is up to 15 minutes to perform the change.

  • Date - 27/11/2012 22:00 - 06/12/2012 10:59
  • Last Updated - 27/11/2012 16:29
Network Outage (Resolved)
  • Priority - High
  • Affecting Other - Auckland Datacenter
  • There are connectivity problems at the Auckland datacenter. We are contacting the datacenter staff and working on resolving the issue as soon as possible. 

    We will update this maintenance notice with any news. 

    Update Wed Nov 14 03:25:28 UTC 2012: Connectivity is back and it seems to be a routing related issue. We will get a confirmation on the cause of the problem.

    Update Fri Nov 16 2012 03:13:17 AM UTC: From the datacenter... "We had an issue with our Juniper Core Network. We are currently reviewing what happened to one of the switches in the stack, and why one switch took out the entire stacked network that should have been redundant." Options are being discussed with Juniper consultants to identify the best solution.

  • Date - 14/11/2012 15:54 - 14/11/2012 16:19
  • Last Updated - 16/11/2012 16:24
Network Outage (Resolved)
  • Priority - High
  • Affecting System - UK Data Center (Maidenhead)
  • A few servers are unreachable at the London data center. We're investigating and will update this notice with further info as we have it.

    Update: Servers were restarted. We're still waiting for an explanation from the data center.
    Update: Some of the servers are back online. Data center technicians tell us it's a PDU bar issue. They are working on resolving it now.
    Update: All servers are back online.

  • Date - 11/10/2012 00:05 - 27/10/2012 15:35
  • Last Updated - 27/10/2012 15:35
Network Outage (Resolved)
  • Priority - High
  • Affecting Other - Auckland Datacenter
  • We are currently experiencing network issues in Auckland.  We are investigating and will update this shortly.

    Update: The data center reports that it is a networking issue and that they are working on it right now.

    Update: Engineers working on issue. Internal routing issue between core and DC switch stack at this stage. Recovery coming.

    Update: Servers are pinging again. If you are still having issues, please come to live chat and we can check for you. We will also update this notice with an incident report as soon as we get one.

    Update: The network engineers report that one of the routes in their switch for their firewall failovers was incorrect (triggered by a failure in a firewall). They are combing the routers for any other incorrect rules. They assure us this is a one-off event.

    Data center incident report:

    At approximately 11:15AM 30/07/2012 our network began experiencing an internal network failure.
    A number of subnets on the HD network started experiencing sustained connectivity loss. HD network engineers immediately began investigating to identify where the issue resided; it was resolved at 12:10PM.

    The HD external network was online, and upstream was accepting our BGP announcements on all four redundant routing paths.
    Upon further investigation, it was determined that the root cause was a rogue interconnect subnet between our edge routers (PE) and the customer-facing Juniper switch stack (CE).

    Engineers had to search through the CE (Juniper Switch Stack) datacentre-facing switches to identify an undocumented rogue route to an IP address that FG1 failed to announce to CE correctly at 11:15AM. This was triggered by a routine change in the Fortigate in question, which caused CE to static-route to an IP that should no longer have been active within our network. We accept this was an oversight on our part.

    HD takes this event very seriously and is taking proactive steps to eliminate any future possibility of service disruption.
    Network engineers have begun a thorough audit of all interconnect HA fail-over IPs and customer-facing VLAN networks to ensure that this type of oversight cannot happen again.

    We expect the audit to be completed within 72 hours and in the meantime we are actively monitoring our network for any more potential issues.

  • Date - 30/07/2012 11:26 - 02/08/2012 14:35
  • Last Updated - 30/07/2012 15:51
Network Outage (Resolved)
  • Priority - Critical
  • Affecting System - Data Center
  • Some of our clients are experiencing some trouble connecting from NZ to their servers. Datacenter staff are working on fixing that now.

    Update 13/07/2012 6:20PM NZT: The issue appears to affect only Telecom customers connecting to IPs in the 60.234.72.0/24 range. We have asked upstream to investigate and resolve it ASAP.

    Update 13/07/2012 7:40PM NZT: We have found the issue is also affecting some other New Zealand ISPs. We have passed these details on to upstream as well.

    Update 13/07/2012 9:23PM NZT: We have spoken to datacenter staff. They have been working on the issue since they were made aware of it and are doing all they can. Unfortunately, at this stage we are waiting on the datacenter's old upstream provider to resolve the issue.

    Update 13/07/2012 10:07PM NZT: We have been informed that it will not be possible to restore these IPs (in the 60.234.72.0/24 range) due to issues with the old upstream provider (Orcon), despite the fact that we were guaranteed availability of this IP range until 22 July 2012. We are forced to migrate to the newly assigned IP addresses immediately. We will notify all affected customers as soon as possible to arrange the updates.

    Update 14/07/2012 03:55AM NZT: Migration to the new IP range has been completed.


  • Date - 13/07/2012 06:28 - 15/07/2012 08:00
  • Last Updated - 16/07/2012 14:13
60.234.72.0/24 IP addresses (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • Update Monday July 16 10:26AM NZST:

    At approximately 6pm Friday July 13 all of our legacy 60.234.x.x IP addresses stopped working.

    Soon after we contacted our upstream provider, HD. They were aware of the issue and had begun working with their upstream providers to isolate the issue. It was quickly determined that Orcon (who own these legacy IPs) had null-routed the IP addresses.

    The immediate impact was that customers with Telecom, Orcon and other New Zealand ISPs were not able to reach these IPs. Some international routes were also affected. Traffic for these IP addresses entering New Zealand via Vocus (one of HD's upstream providers) appears to be unaffected.

    All of our other IP addresses, including those from the newly allocated range, were unaffected.

    The null-route remains in place. Orcon are NOT willing to remove it. We are seeking an explanation for the null-routing from both HD and Orcon.

    We will NOT be able to restore these legacy IP addresses. We had no choice but to migrate away from those IP addresses on Friday night, unexpectedly and with no notice from Orcon. If you need assistance with any issues, please open a support ticket immediately.

    Update Monday July 09 10:35AM NZST:

    Unfortunately we will not be able to retain these IPs. We will begin the renumbering shortly and follow up with each customer once we have allocated the new IPs. We apologise for any inconvenience this may cause.

    Summary:

    We may need to change IPs in these ranges:

    • 60.234.68.208/28
    • 60.234.72.0-255

    Those IP ranges will definitely be available until 22 July 2300 NZT. No action is required from you at the moment. This is a heads up only. We will be co-ordinating any changes you may need to make over the next few days.
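
    For anyone checking whether a server address falls inside the ranges listed above, here is a quick illustrative check (not part of our migration tooling) using Python's standard `ipaddress` module:

```python
import ipaddress

# The two legacy ranges named in this notice
affected = [
    ipaddress.ip_network("60.234.68.208/28"),  # 60.234.68.208 - .223
    ipaddress.ip_network("60.234.72.0/24"),    # 60.234.72.0 - .255
]

def is_affected(ip: str) -> bool:
    """True if the address sits in one of the affected legacy ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in affected)

print(is_affected("60.234.72.17"))   # True
print(is_affected("60.234.68.200"))  # False (below the /28)
```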

    Detail:

    Yesterday (Sun 1 July) the Auckland-based data center we use rejigged its network setup: changing providers, changing the peering points, and changing the redundancy and failover.

    The changes are now in place and we are seeing some improvement in latency (because peering is now more direct). Part of the changes involves Hosting Direct no longer using Orcon (as Hosting Direct have blamed them for some of the recent outages).

    Some IP ranges in use by our customers were allocated to us by Orcon. Today we were notified that these IP ranges may not continue to be available. The 'may not' means we have been told they will not be available after 22 July, but we are still working to see what our options are for retaining those IP ranges. At any rate, no change will be required before 23 July.

    If we can do that, then no changes will be required. If we are not able to keep those IP ranges, then our plan is to:

    • Set DNS TTL to a low value
    • Add and configure new extra IPs to our servers (from a range directly allocated by APNIC).
    • Configure all primary, secondary and tertiary name servers to use the new DNS records
    • Notify all affected clients to update externally pointing A records to the new IPs

    We will work to ensure that any changes are done as seamlessly as possible and have the smallest impact on you.
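
    As a sketch of the TTL and A-record changes described above, in a BIND-style zone file (the hostname and the new IP 203.0.113.10 are placeholder values; actual replacement IPs will come from the newly allocated APNIC range):

```
$TTL 300                                           ; TTL lowered ahead of the cutover
www.example.co.nz.   300   IN   A   203.0.113.10   ; A record pointed at the new IP
```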

  • Date - 09/07/2012 15:55 - 26/07/2012 21:02
  • Last Updated - 16/07/2012 11:13
Scheduled Maintenance (Resolved)
  • Priority - High
  • Affecting Other - Auckland Datacenter
  • Period: 01/07/2012 05:00 - 01/07/2012 06:00 NZST (30/06/2012 17:00 - 30/06/2012 18:00 UTC).

    Update @ 9:53:29 AM NZST: Connectivity has stabilized now, and routes from the NZ networks are also checking out ok now. We will continue monitoring, please open a ticket if you are having any trouble.

    Update @ 8:57:39 AM NZST: Possibly seeing some routing issues from some NZ networks to certain IPs, we are checking this now.

    Update @ 8:28:09 AM NZST: Datacentre reports the issue should be resolved and we should not see any more packet loss.

    Update @ 8:12:30 AM NZST: Datacentre reports still seeing some routing issues, working on it.

    Update @ July 1, 2012 7:52:03 AM NZST: We are seeing some packet loss in Auckland after this maintenance, we are investigating now.

    ===

    The datacenter is making fundamental changes to its core network. They have been planning and implementing these changes for the past 3 weeks and expect no more than a few 10-minute outages.

    The maintenance is designed to increase fault tolerance and ensure that failover to different upstream providers works reliably. New international capacity is also being configured. The changes are expected to improve the latency and reliability of the network.

  • Date - 01/07/2012 00:00 - 07/07/2012 22:50
  • Last Updated - 01/07/2012 10:15
Network Outage (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • Our monitoring has alerted us to several machines not responding at the Auckland datacenter. We are investigating.

    Update Tuesday, June 12, 2012 12:15:32 AM UTC: Servers are responding again. There appears to have been a networking issue. We will post additional details once we receive an outage report from the datacenter.

  • Date - 12/06/2012 11:45
  • Last Updated - 01/07/2012 10:14
Network Outage (Resolved)
  • Priority - Critical
  • Affecting Other - London Datacenter
  • Update: Pulsant have now issued the following RFO:

    "Pulsant would like to sincerely apologize for the power disruption that affected our customers at the Maidenhead site on Tuesday 29th May. We would also like to apologize and make a correction to the timeline stated in the report. The actual power disruption occurred at 22:08 and not 22:38 as previously stated. A further apology must also be made for the confusion that resulted from the initial incident being reported as a Network related issue which was factually incorrect and was in fact a Power Supply related issue as a result of the UPS failure. There are a significant number of lessons learnt in terms of improved communications.

    The UPS units at the Maidenhead data center facilities 2 & 3 are approaching 5 years old, and the manufacturer recommends maintenance including scheduled component replacement alongside routine offline health checks. Two separate maintenance windows were scheduled according to company policy, to be executed under Pulsant data center management responsibility and supported by certified and approved external UPS engineers from UPSL Limited. All replacement parts were genuine manufacturer-certified components.

    During the first of these maintenance windows which started at 21:00 on May 29th 2012, a number of UPS components were replaced to meet manufacturer guidelines. A UPS unit was removed from service by placing the UPS unit into bypass. The parts were successfully replaced and tested offline. The UPS unit was returned to the parallel UPS solution at Maidenhead. However, on the return to service, there was a failure of one of the replaced components which disabled the control function and disabled the load management capability of the UPS unit. This resulted in all racks connected to this UPS unit losing power immediately. All other units on the parallel string remained active on UPS protected support. An immediate decision was taken to place the electrical load into bypass to enable the affected clients to be recovered but all Maidenhead 2 & 3 data centres would be operating on unprotected mains power supply. This was very poorly communicated.

    The engineers immediately identified the failed component and replaced this. However, on validation testing the engineers confirmed there had been potential damage to the UPS control board which is critical to management of the unit. This was tested and found to be faulty. The replacement of this component has to be to an exact specification given it must communicate with all the other UPS units in the parallel string. If there are any incompatibilities, this could result in UPS control issues. As the control board required replacement at the exact specification, investigations proceeded offsite but confirmed that one was not immediately available. It became apparent a suitable board would have to be shipped from the manufacturer in Switzerland. After risk assessment, a decision was taken to upgrade all the control boards. However, due to the impact of having to replace all the control boards simultaneously, a decision was taken to operate in an unprotected state of mains bypass during 30th May whilst all necessary parts were fitted and tested prior to UPS reinstatement. This involved load bank testing of the UPS units for 3 hours. All UPS units were successfully reinstated at 22:00 on 30th May 2012. The investigation and analysis works are continuing with Newave and UPSL.

    The Maidenhead UPS service had been operating without issue since the last service issue on 17/03/2011 but required scheduled maintenance to maintain manufacturer support. It has been regularly and successfully maintained on 3 separate occasions since 01/03/2011. The failure of the UPS was a direct result of the failure in one of the serviceable components which was replaced as recommended by the UPS manufacturer.

    The failure of the component resulted in both the UPS load being unsuccessfully discharged to neighboring UPS devices, and damage to the control board. The failed component and the control board were immediately couriered to the manufacturer testing facilities in Switzerland where they are still undergoing analysis. To ensure complete continuity in UPS controls, it was essential we align the firmware revisions on the control boards so it was necessary to review all UPS devices and their associated contactors to minimise reinstatement risk and any possible disruption during future maintenance events."

    ---

    Update: We have received a partial update from Pulsant (who own and operate the datacenter). We are still waiting on an official RFO:

    "Summary

    The 6 Eaton Powerwave units supporting the Pulsant 2 & 3 facility are approaching 5 years old. Under Pulsant management, we have already completed and brought up-to-date the maintenance on these devices with UPSL, who are the only authorised Powerwave support partner in the UK at present. At the last maintenance window, we successfully replaced a large number of the batteries as a precautionary measure following testing on battery performance. The maintenance schedule as published by Powerwave manufacturing in Switzerland recommends the replacement of specific Low Voltage fuses at 5 years of age. We therefore scheduled maintenance windows over 2 separate nights. The UPS devices are installed in a parallel configuration, supporting N+2 redundancy overall or N+1 on each parallel string, and the load is distributed throughout the datacentre across all 3 phases and the six UPS units.

    UPS Maintenance

    The maintenance sequence removes one UPS device at a time, which maintains N+1 redundancy on one string but only N on the string under maintenance. The UPSL engineer attended site with the approved manufacturer parts and we commenced maintenance on the planned first 3 UPS units on time. We put UPS unit 1 into bypass, completed maintenance successfully, tested it and returned the unit to service. We then placed UPS unit 2 into bypass, completed maintenance successfully, tested it and returned it to service. The UPS took the load at 22:38, and a Low Voltage fuse which had just been replaced as part of the scheduled maintenance subsequently failed. UPS 2 shed the IT critical load at 22:40 but failed to disengage from the parallel configuration, resulting in the partial loss of 2 phases of power. UPS units 4, 5 and 6 were unaffected.

    UPSL investigated and replaced all Low Voltage fuses and the inverters as a precaution. The incident was escalated to PowerWave support in Switzerland at 23:32, and on their recommendation a number of components were replaced. This took 4.5 hours to complete, leaving insufficient time to load-bank test the configuration to our satisfaction before approving a UPS reinstatement. The risk assessment was completed at 5AM and a decision was taken based on discussions with SSE, which confirmed no planned engineering works on the grid network supplying Maidenhead 2 & 3.

    Current Position

    UPS 2 has subsequently been placed into electrical bypass. UPSL and PowerWave Switzerland are continuing to validate the test results from the unit throughout today, with a view to providing sufficient confidence to reinstate service from 22:00 today. We will only reinstate the device if this confidence can be achieved; otherwise, UPSL are arranging an exchange for a new unit. We are monitoring the power position in detail and actively working with UPSL and PowerWave on the reinstatement plan and approach. We will send a further customer broadcast later today to confirm the exact arrangements for reinstatement."

    ---

    Original notice:

    We are experiencing connectivity issues at the London datacenter. We are currently investigating the issue and contacting the data center staff. We will update this maintenance notice with any feedback we receive.

    UPDATE 21:25:38 UTC: London data center staff are looking at the issue

    Update 21:47:04 UTC: The data center report they "are currently experiencing power issues affecting some dedicated servers". We are seeing some servers coming back.

    Update 22:22:40 UTC: Core network has started, most of the servers are coming up.

    Update 01:16:13 UTC: The data center's latest report: "Engineers on-site have restored power to the affected servers and have completed the re-powering of the servers". We are just dealing with a couple of customers' dedicated servers that failed to boot.

    Update 0500 UTC: Normal service has been restored. We are still waiting for an official account of the incident, this page will be updated when we receive that.

  • Date - 30/05/2012 05:45 - 30/05/2012 23:45
  • Last Updated - 14/06/2012 21:44
Network Outage (Resolved)
  • Priority - Critical
  • Affecting Other - London Datacenter
  • Thursday, May 31, 2012 3:07:04 PM UTC

    We're seeing significant packet loss in our London data center. We are investigating and we'll post updates about the issue here.

    UPDATE: The network issue is due to an external problem with a major UK internet peering point which is affecting ISPs across the UK. This is unfortunately outside of our provider's control, but the issue is being monitored. We'll provide further updates later.

    Update: Network staff report that things are now stable again, but they will be closely monitored for further issues.

  • Date - 01/06/2012 03:13 - 03/06/2012 00:00
  • Last Updated - 03/06/2012 23:59
Scheduled Maintenance (Resolved)
  • Priority - High
  • Affecting Other - Auckland Datacenter
  • Period: 09/05/2012 22:00 - 10/05/2012 02:00 NZST (09/05/2012 10:00 - 09/05/2012 14:00 UTC).

    (http://tinyurl.com/6s3nopp)

    A number of improvements have been scheduled on the network core affecting our Auckland servers. Expected benefits include improved routing and latency, and better peering. Also included will be the addition of a third peer to improve network redundancy.

    • Update BGP sessions
    • Test failover to secondary peer
    • Test failover to primary by bringing down secondary peer
    • Test failover to tertiary by bringing down primary peer
    • Turn off Layer 1 on the fibre to Vector
    • Test fibre cut simulation to Telecom
    • Test Juniper core crash

    We expect at most 7 micro-outages (no more than 5 minutes each) during this event window; however, there is a possibility of a longer outage (5-20 minutes).
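For illustration only (this is not the provider's tooling): micro-outage counts like the estimate above are typically tallied afterwards from monitoring probes. A minimal sketch of collapsing up/down probe samples into outage windows:

```python
from datetime import datetime, timedelta

def outage_windows(samples):
    """Collapse (timestamp, is_up) monitoring samples into contiguous
    outage intervals, returned as (start, end) tuples."""
    windows, start = [], None
    for ts, up in samples:
        if not up and start is None:
            start = ts                      # outage begins
        elif up and start is not None:
            windows.append((start, ts))     # outage ends at first good probe
            start = None
    if start is not None:                   # outage still open at last sample
        windows.append((start, samples[-1][0]))
    return windows

# Hypothetical example: probes every minute; down at minutes 3, 4 and 10
t0 = datetime(2012, 5, 9, 22, 0)
samples = [(t0 + timedelta(minutes=i), i not in (3, 4, 10)) for i in range(15)]
wins = outage_windows(samples)
print(len(wins))                    # 2 outages
print(max(e - s for s, e in wins))  # longest window: 0:02:00
```

Resolution is limited to the probe interval, which is why outage durations on pages like this one are usually quoted as ranges.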

  • Date - 09/05/2012 22:00 - 01/06/2012 03:25
  • Last Updated - 06/05/2012 23:53
Servers Unreachable at Auckland Datacenter (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • Update @ 5:22:57 AM UTC: Issue is marked resolved, upstream provider gave us the following report:

    On 29th April 2012 at 15:31 (NZST), an unplanned outage occurred on the HDDC core network stack impacting customer services that were traversing this path. The problem was identified by network alarms and our engineers resolved the issue within 20 minutes.

    Root Cause Analysis:

    HD is in the process of investigating the root cause of this issue. Preliminary findings indicate that the outage was the result of a fault with one of the switches in the core network stack; once the stack was rebooted, the network started operating normally.

    Corrective Action:

    We will keep a close eye on this stack over the next 48 hours and implement any changes or hardware replacements needed to keep our network running smoothly for our customers.

    ===

    Servers are not responding in our Auckland facility - technicians are investigating now.

    Update: Upstream investigating, should have more information in the next 15 minutes.

    Update @ April 29, 2012 3:55:49 AM UTC: Servers are responding again now, still awaiting further information on what happened.

  • Date - 29/04/2012 16:42 - 06/05/2012 23:04
  • Last Updated - 29/04/2012 21:05
Auckland Datacenter Power Maintenance (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • Maintenance Window: Sat 7/04/2012 23:30 - Sun 8/04/2012 00:15 NZT (Scheduled)

    Affecting System: Auckland Data Centre

    One of our upstream providers will be moving all Auckland datacenter machines on to new Power Distribution Units (PDUs). Following concerns over the reliability of the existing PDUs, they will be replaced with more reliable units.

    This involves stopping all running dedicated servers and VPSs, powering down the physical hosts, and then plugging them into the new PDUs.

    During this time all servers will be unavailable.

    The actual changeover to the new PDUs is estimated to take 5-15 minutes. It may then take a further 20-45 minutes for the physical hosts and VPSs to start up.

  • Date - 07/04/2012 00:00 - 29/04/2012 16:46
  • Last Updated - 07/04/2012 23:26
Restarting Web Servers for Timezone Update (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
    Web servers will be restarted to pick up the Daylight Saving time change to NZST. No interruption anticipated. Downtime 5-10 seconds.

  • Date - 02/04/2012 12:03 - 07/04/2012 23:24
  • Last Updated - 07/04/2012 23:24
Restarting Mail Server for Timezone Update (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
    The mail server will be restarted to pick up the Daylight Saving time change to NZST. No interruption anticipated. Downtime 1-5 seconds.

  • Date - 02/04/2012 11:59 - 07/04/2012 23:24
  • Last Updated - 02/04/2012 12:02
Auckland data center uplink maintenance (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • Maintenance Window: Tue 27/03/2012 00:01 - 05:00 NZT (Scheduled)
    Affecting System: Primary Uplink

    One of our upstream providers will be replacing equipment on our primary uplink.

    Impact: We have redundancy in place to cope with this, but there may be degraded performance and intermittent connectivity issues for up to 30 minutes during the event window.

  • Date - 27/03/2012 00:01 - 07/04/2012 23:25
  • Last Updated - 22/03/2012 16:33
Servers Unreachable at Auckland Datacenter (Resolved)
  • Priority - High
  • Affecting Other - Data Center
  • There appears to be a network issue at the Auckland datacenter. We are investigating.

    Update: All equipment is responding normally again. We are waiting on an outage report from the datacenter.

    Update: The datacenter were performing a route change, and problems occurred after that. They noticed packet loss on their core stack and rebooted their core Juniper stack. They report that the packet loss issue is resolved and that the network should be responding normally.

    Update: The data center will be re-applying the route changes between 1800 and 2100 NZT tonight (NZT Wed). They will be calling in Juniper engineers to oversee the change. The routes happen to be for some new IP ranges we are bringing online.

    Update: the routing change has been applied and things seemed to be working properly... though perhaps not quite. We will be requesting more information from upstream.

    Update: a firmware bug meant the datacenter's core network devices had to be restarted with an upgraded OS, which caused a further outage of about 10 minutes. We have been advised that the network is stable now.

    Note that there have been good 'discussions' with the data center about keeping future network changes out of core business hours.

  • Date - 14/03/2012 09:32 - 22/03/2012 16:34
  • Last Updated - 14/03/2012 21:33
Auckland Datacenter Packet Loss (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • 1424 UTC: We are seeing significant packet loss to most of our Auckland servers at the moment, investigating.

    Update 1437 UTC: all IPs are back after the network was failed over to a separate upstream. Still checking to see what happened.

    Update 1445 UTC: We have been advised that an upstream exchange (the Vector Albany exchange, North Shore) went offline. For some reason failover to the working upstream (Telecom) took a while; we are investigating to see how that can be remedied.

  • Date - 06/03/2012 14:24 - 11/03/2012 22:23
  • Last Updated - 07/03/2012 09:16
Server Reboot Required (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • A server reboot will be performed at 23:30 in order to upgrade the Kernel. Estimated downtime 5 minutes. [completed]

  • Date - 11/02/2012 23:30 - 11/02/2012 23:35
  • Last Updated - 12/02/2012 00:49
London Datacenter Network Connectivity (Resolved)
  • Priority - Critical
  • Seeing a routing loop affecting a large number of our servers in London at the moment, investigating.

    Update 0245 UTC: We confirmed all our services are up behind our core switches; a routing loop further up the chain is causing this. We have escalated this with the datacenter.
    Update 0250 UTC: Latest update: the datacenter network team is working on this right now.
    Update 0310 UTC: Coming back online.
    Update 0315 UTC: It appears this was caused by a DDoS flooding an upstream device. The target has been nullrouted to mitigate that. Confirmed normal connectivity has been restored.

  • Date - 24/11/2011 15:43
  • Last Updated - 24/11/2011 16:21
London Packet Loss (Resolved)
  • Priority - Medium
    We have seen some short network outages at our London facility; we are investigating now.

    We are still not sure what the problem is; the network team will be in soon and will investigate further. All servers are responding. We saw a short outage at about 21:20 UTC, and then a longer outage lasting several minutes at around 03:50 UTC.

    UPDATE: We noticed a short network outage (1 min or so) at about 10:48 GMT today. The datacenter staff posted a general network report:

    We will be carrying out some work on our network to improve stability and avoid issues such as those seen on 2/11/2011. The routers that host the default gateway for these servers are being replaced. It is not expected that there will be a complete outage longer than 10-15 seconds; however, this is a legacy network, so unforeseen interactions between our routers and your server may occur, causing localized or partial outages.

    We will be carrying out the work at 07:30 on Wednesday the 23rd of November 2011.

  • Date - 20/11/2011 16:43
  • Last Updated - 24/11/2011 16:11
DDOS Attack (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • We are currently seeing significant packet loss to our Auckland servers. We have been advised there is a DDOS underway there, technicians are working on mitigating that already.

    Update: awaiting confirmation; looks to be resolved.

  • Date - 15/11/2011 17:49 - 15/11/2011 20:03
  • Last Updated - 15/11/2011 20:02
Network Issues (Resolved)
  • Priority - Critical
  • We are seeing intermittent network issues at the London datacenter. We are investigating.

    Update: We located an issue with the way our internal routes were being updated when partial connectivity to our provider was lost. We have put a fix in place that has restored connectivity; it should not cause any further issues.

  • Date - 09/11/2011 08:32 - 09/11/2011 09:12
  • Last Updated - 09/11/2011 10:02
Server Outage (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
    We experienced a temporary memory over-allocation issue caused by an incorrect kernel build from the recent hardware upgrade. The kernel didn't recognise the full 64GB allocated to the server and was only able to utilise 16 of the 24 cores.

    A new kernel build was applied and the server rebooted for it to take effect. We do not anticipate a recurrence.

    [Completed 21/10 17:29. Downtime 10 minutes.]

  • Date - 21/10/2011 17:19 - 21/10/2011 17:29
  • Last Updated - 21/10/2011 23:07
Server Hardware Upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • Server hardware will be upgraded between the hours of 00:00 - 08:00 on Thu 20/10/2011. This is round five of a major
    performance upgrade.

    Because we are upgrading to 15K SAS drives, we are unable to hot-swap our existing drives. During this time, access to services for your domain may be interrupted; however, a 503 maintenance page will be displayed where applicable. We apologise for the short notice.
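A 503 maintenance page of the kind mentioned above is commonly served with an Apache configuration along these lines. This is an illustrative sketch only, not the actual Create Hosting setup; it assumes mod_rewrite and mod_headers are enabled and that the page lives at /maintenance.html:

```apache
# Drop-in maintenance config: answer everything with HTTP 503.
# Remove or comment out when maintenance is finished.
RewriteEngine On
# Let the maintenance page itself through
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
# Everything else gets 503 Service Unavailable
RewriteRule ^ - [R=503,L]
ErrorDocument 503 /maintenance.html
# Ask clients and crawlers to retry in 10 minutes
Header always set Retry-After "600"
```

Returning 503 rather than 200 matters during planned work like this: it tells search engines the outage is temporary so the pages are not re-indexed as errors.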

    Time: Thu 20 Oct at 00:00 NZDT (approx)

    [Completed 05:10]

  • Date - 20/10/2011 00:00 - 20/10/2011 00:00
  • Last Updated - 21/10/2011 23:00
Servers Unreachable at Auckland Datacenter (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • Saturday, October 1, 2011 9:38 AM NZDT

    Our monitoring has picked up servers not responding in the Auckland data center, we are investigating now.

    UPDATE: All servers are responding. The datacentre in Auckland was performing some routine upgrades and scheduled maintenance. Due to a miscommunication, we did not have advance notice of the maintenance posted. We are working on improving our communication channels with our upstream providers.

     

  • Date - 02/10/2011 22:55 - 02/10/2011 22:58
  • Last Updated - 02/10/2011 22:57
Server Reboot Required (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
    The server will need to be rebooted at 19:45 on 29/09/2011 to mount a drive. We apologise for the short notice.

    Time: Thursday 29th September at 1945NZDT
    Duration: we expect the reboot to take 3-5 minutes

  • Date - 29/09/2011 19:45 - 02/10/2011 22:57
  • Last Updated - 29/09/2011 17:38
Servers Unreachable at Auckland Datacenter (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • A few servers are unreachable at the moment at the Auckland data center. We are investigating the issue.

    Update: Notification says technicians are working on it; there is a connectivity issue affecting a significant portion of the datacenter.
    Update 2215 UTC: the connectivity outage appears to be the result of a peering issue upstream. Datacenter technicians have escalated the issue appropriately.
    Update 2230 UTC: some gear is responding again; we have been advised there may be another short blip before full access is restored.
    Update 2250 UTC: everything should be responding normally again. Please open a support ticket with us if that is not the case.


  • Date - 28/09/2011 21:52 - 02/10/2011 22:52
  • Last Updated - 28/09/2011 23:55
Auckland Datacenter Server Move (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
    We will be moving our servers at the Auckland data center to a new rack.

    Time: Monday 26th September at 0330NZDT
    Duration: we expect the move to take 60-90 minutes

    Our staff will ensure all servers are powered down gracefully before the maintenance starts. Then we will physically move them to the new location.  Once the move is complete we will check that all machines have network connectivity.

    During the move the servers will be unavailable.

    We are moving the servers from cabinets in the Orcon data center to a cabinet in the Piermark facility (a couple of kms up the road).

    The new location gives us better control over networking in the cabinet and allows us to expand our network to scale with demand. This ties in with new systems due to arrive at the end of the month. Remote hands access will also be improved.

    There are no IP changes as a result of the server moves.

  • Date - 26/09/2011 03:15
  • Last Updated - 26/09/2011 10:03
Mail & SSH Services Intermittent (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • Issues with POP mail and SSH services reported at 19:24 and are being worked on by technicians. Server will need to be restarted at 20:00, expected downtime 5 min.

    [Update] The server failed to come back up. All services will need to be paused to prevent data corruption while we restore essential library files from the latest backup image. No data will be lost.
    [Completed] 21:40 All services for westmere.createhosting.co.nz are back online. Application services will be separated out to prevent a recurrence.

  • Date - 05/09/2011 19:27
  • Last Updated - 05/09/2011 22:03
Apache/MySQL Restart (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
    An Apache and MySQL server restart needs to be performed between 11:30 NZST and 12:30 NZST in order to apply performance improvements. MySQL will be optimised to utilise a further 16GB of RAM. Website functionality is not expected to be affected, as Apache will be shut down prior to the MySQL restart. Expected downtime 30 seconds.
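For context, letting MySQL use a further 16GB of RAM typically means enlarging the InnoDB buffer pool. A hypothetical my.cnf fragment (the value is illustrative only; the actual WESTMERE settings were not published):

```ini
[mysqld]
# Give InnoDB a larger in-memory cache for data and indexes.
# 16G here is illustrative; size it to the RAM actually added.
innodb_buffer_pool_size = 16G
```

On MySQL versions of that era this setting is not dynamic, so a full MySQL restart is needed for it to take effect, which is consistent with the brief downtime window above.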

  • Date - 05/08/2011 11:30 - 05/09/2011 22:03
  • Last Updated - 05/08/2011 11:28
Scheduled Administration - Server Restart (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • Server will be restarted between 22:30 NZST and 23:00 NZST to increase RAID10 disk size and apply OS and other essential security updates. [Completed]

  • Date - 03/08/2011 22:30
  • Last Updated - 04/08/2011 09:28
MySQL Not Responding (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
    MySQL Server reported to be unresponsive; down for approx 2 min. Investigating. MySQL optimisation will be performed tonight: all database tables will be checked for errors. This may take up to 60 minutes, during which time server performance will be reduced.

  • Date - 03/08/2011 06:48 - 03/08/2011 10:06
  • Last Updated - 04/08/2011 09:26
International Packet Loss (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
    High packet loss on international traffic has been reported from 1) NJ, USA, 2) Montreal, Canada, and 3) Cape Town, South Africa. This appears to have been an issue throughout the day. We are checking with the NZ datacenter staff.

    21:30 UTC: issue escalated at the datacenter. Please note that NZ national traffic does not appear to be affected.
    21:50 UTC: issue appears to have subsided somewhat. We are still monitoring this and waiting for an update
    22:00 UTC: Confirmed that the international link is receiving a large DDOS attack. Technicians are actively working on this issue.
    22:20 UTC: Attack has been isolated and normal service seems to have been restored at this stage. If you are still seeing issues please submit a ticket with details.

  • Date - 03/08/2011 21:41 - 04/08/2011 00:00
  • Last Updated - 04/08/2011 09:25
MySQL Restart (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • MySQL service was restarted due to an unknown error. Service unavailable for approx 2 min. We are investigating the cause.

  • Date - 20/07/2011 12:48 - 03/08/2011 21:38
  • Last Updated - 20/07/2011 13:00
Registrar Network Outage (Resolved)
  • Priority - Low
  • Affecting Other - .NZ Registrar
  • As most of you are aware, Distribute IT's systems are currently offline due to a deliberate, premeditated and targeted attack on their Network (completely isolated from Create Hosting).

    This impacts domain renewals and UDAI issuance for .nz domains. Provisions are in place with the .nz registry to allow extensions on renewal dates whilst Distribute IT resolve the issue. We apologise in advance for the inconvenience and will work with Distribute IT's emergency support staff to process domain renewal requests outside of the automated system.

    Domain UDAIs are not able to be issued at this time until further notice (out of our control). For more information please see http://distributeit.com.au/.

     

  • Date - 11/06/2011 09:42 - 15/07/2011 21:15
  • Last Updated - 15/07/2011 21:15
Public Network Connectivity (Resolved)
  • Priority - Medium
  • Affecting Server -
  • Date: Tuesday, 28 June 2011
    Start Time: 0700UTC / 0800BST
    End Time: 0730UTC / 0830BST
    Services affected: Public Network Connectivity
    Location: London (Maidenhead)
    Duration: 30 minutes

    During this maintenance window our network team will be working with our London provider to add capacity to our network.

    This will involve swapping over to new uplinks and some slight configuration changes on our provider's end. It will cause loss of connectivity and/or packet loss for up to 10 minutes.

    We have selected this time as it is the earliest time in the day that our provider's engineers are available. We will only be doing what is absolutely necessary, to keep the impact to a minimum.

  • Date - 26/06/2011 22:14 - 03/08/2011 21:38
  • Last Updated - 26/06/2011 22:20
Memory Upgrade (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
    A server upgrade is planned for midnight tonight, 16/06/11 00:00 NZT. This will require a shutdown, a kernel upgrade, and a reboot. Downtime is expected to be 15-30 minutes. [completed]

  • Date - 15/06/2011 00:00 - 16/06/2011 09:42
  • Last Updated - 16/06/2011 09:42
Orcon Datacentre Network Issue (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • We are seeing some packet loss to our NZ Data Centre. Technicians are investigating further.

    @UTC 0406 (NZT 1606) Data centre have reported that they have identified the problem and are working on a fix.  ETA 5 minutes.
    @UTC 0420 (NZT 1620): Issue appears to be resolved.  Awaiting report from Data Centre.

    Issue was reported as being caused by a misconfigured switch at the Data Center.

  • Date - 12/04/2011 16:13 - 12/04/2011 17:04
  • Last Updated - 18/04/2011 10:03
Orcon Datacentre Network Issue (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
    We have received several reports of possible connectivity issues in our NZ datacentre. We are investigating further now.

    Update 0115 UTC: We have been advised by the datacentre that there is a DDOS attack currently underway which is degrading service. Datacentre technicians are working to resolve it within the next 5 minutes.

    Update 0135 UTC: Ongoing - datacenter technicians are continuing to work on resolving the DDOS issue.
    Update 0230 UTC: Ongoing - the DDOS appears to be mainly affecting internationally routed connections. Investigations are continuing.
    Update 0245 UTC: As of 0235 UTC, network stability appears to have been restored. Monitoring is ongoing and some IP ranges may still be affected. If you are still seeing an issue please let us know. A full incident report is pending.

    Update: Per the incident report we received the following details:

    The purpose of this email[sic] is to provide a time-line and identify the root cause of the network attack that occurred on April 8, 2011, which affected all clients at our Piermark Datacentre. The attack centred on the core network equipment, which was bombarded by a DDOS attack originating from China. Clients experienced a wide range of issues during the network event, from intermittent inaccessibility to complete outages.

    Event Time: 12:30pm - 2:30pm

    ...

    What actions are being taken to prevent this from re-occurring?

    In the process of working with our upstream provider, we have made the decision to diversify our upstream providers. At this time we do not have a scheduled date for this change, but we will provide notice as soon as it has taken place.

  • Date - 08/04/2011 13:07 - 08/04/2011 14:30
  • Last Updated - 18/04/2011 10:02
London Packet Loss (Resolved)
  • Priority - High
  • Affecting Other - Network
  • We are currently seeing a network issue in the London data centre, we are investigating now.

    We are still seeing intermittent packet loss on one switch in our London data centre.  We are investigating the cause.

    Update Saturday, March 26, 2011 10:43:17 PM UTC: The problem is isolated to a single cabinet. We are seeing errors on one of our uplinks that runs between that cabinet and our core. We have manually failed over to our backup uplink to restore connectivity. We are having technicians check on the primary uplink.

  • Date - 27/03/2011 12:36 - 08/04/2011 13:12
  • Last Updated - 27/03/2011 12:40
Server Temporarily Unavailable (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
    Services (HTTP, SMTP, POP, MySQL) are currently unavailable. Investigating. [resolved]

    Update: Services need to be disabled while a backup image is mounted to resolve a binary linking issue.
    Update: Services restored.

  • Date - 24/03/2011 10:00 - 24/03/2011 10:35
  • Last Updated - 27/03/2011 12:32
Orcon Datacentre Network Issue (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
    Some customers are reporting intermittent access to services at the Orcon Datacentre. The issue only affects NZ customers on some ISPs. Investigating.

    Update: It appears to be a routing issue. We are waiting for details from our upstream provider.

    Update: the data center is reporting an issue with networking gear. They are working on replacing the core equipment involved (and doing an upstream bandwidth upgrade to boot). Waiting to hear back from them about how, or if, the upgrade will affect our customers.

    Update: the data center has given us a heads up that they may need to do emergency network maintenance tonight at 21:00 (Friday 11/03 21:00 NZT). As soon as this is confirmed we will update this page.

    Update: data center staff have reported they will be upgrading the network this evening. From 2100 (Friday, NZT) to 0600. While they are doing this there may be some congestion/slowness on international routes. There also may be 'brief' (30 seconds to a few minutes) network outages as routes change.

  • Date - 11/03/2011 13:17 - 11/03/2011 14:36
  • Last Updated - 11/03/2011 13:18
Orcon Datacentre Network Issue (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • External monitoring is reporting intermittent service access (SFTP, MySQL, Apache). This has been reported as a network issue with the upstream bandwidth provider at the Orcon Datacentre; 2 min "downtime". [resolved]

  • Date - 11/03/2011 12:23 - 11/03/2011 11:57
  • Last Updated - 11/03/2011 11:56
DPS Transactional Issues (Resolved)
  • Priority - Low
  • Affecting System - DPS Payline
  • Some customers have reported that transactions processed via DPS Payment Express are not showing in DPS Payline and/or changes are not reflected. DPS have advised us that there is currently a 5 hour lag in transactions showing in DPS Payline.

    Customers need not do anything. All payments are processing as normal and will show in DPS Payline in due course. [Resolved]

  • Date - 09/03/2011 18:52 - 10/03/2011 00:00
  • Last Updated - 11/03/2011 11:53
Scheduled Maintenance - Orcon Datacenter (Resolved)
  • Priority - Medium
  • Affecting System - Network
  • During a recent power outage, datacenter staff had to plug some of our servers into an adjacent rack to get them back online.

    They now need to be plugged back into their original rack. To do this each machine will need to be powered down for a short time (1-5 minutes).

    We will do this at Tuesday December 14 at 8PM NZDT. [completed]

  • Date - 14/12/2010 00:00 - 14/12/2010 00:00
  • Last Updated - 15/12/2010 12:23
Orcon Datacentre Network Issue (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • External monitoring is reporting intermittent service access (SFTP, MySQL, Apache). This has been reported as a network issue with the upstream bandwidth provider at the Orcon Datacentre; technicians are working on the issue and should have it resolved shortly. [resolved]

  • Date - 01/12/2010 02:12 - 04/12/2010 23:33
  • Last Updated - 04/12/2010 23:33
Server not responding outside NZ (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • Multiple servers in the Orcon NZ datacenter are not responding normally.  Investigating the issue now. NZ customers not affected. [resolved, awaiting feedback from techies at Orcon as to what caused the issue]

  • Date - 03/11/2010 22:47 - 03/11/2010 23:13
  • Last Updated - 22/11/2010 15:14
MySQL Restart (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • MySQL will be restarted to enable an important upgrade to take effect. Downtime will be minimal.

  • Date - 05/10/2010 00:00
  • Last Updated - 28/10/2010 16:06
SMTP Server Temporarily Down (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
    Some customers have reported problems sending mail via SMTP. This is due to a recent PCI compliance Postfix configuration change. We are currently working on this issue and should have it resolved shortly. [resolved]

  • Date - 27/10/2010 11:55
  • Last Updated - 28/10/2010 16:05
Unscheduled Outage (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
    Orcon data center technicians unintentionally tripped a power circuit while installing remote power monitoring devices. The power is back online and staff are working on the problem at the data center.

    More information about this event: http://www.techday.co.nz/netguide/news/vector-blamed-for-orcon-outage/18161/1/

  • Date - 05/10/2010 11:53 - 05/10/2010 12:01
  • Last Updated - 06/10/2010 14:52
Timezone Changed (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • Incorrect server time was reported. This was due to critical security patches and upgrades performed today. Investigating.

  • Date - 05/10/2010 16:01 - 06/10/2010 14:50
  • Last Updated - 05/10/2010 16:21
Unscheduled Outage (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • We are seeing some servers not responding at the Orcon datacenter. External monitoring reported down at 09:39 AM UTC. An emergency notice has been opened to determine the cause and resolve the issue.

    Update 1 @ 09:45 UTC: The data center reports a switch has 'frozen' (while a VLAN was being changed). They are restarting the switch (after 500+ days of uptime, apparently even the Ciscos need a quick nap). From this report, it appears unrelated to the PDU/power issue earlier today.

    Update 10:01:23 AM UTC: Issue resolved, network restored.

  • Date - 15/09/2010 21:45 - 15/09/2010 22:01
  • Last Updated - 23/09/2010 12:01
Unscheduled Outage (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • We are seeing some servers not responding at the Orcon datacenter. External monitoring reported down at 10:06:00 PM UTC. We are not aware of any scheduled maintenance at the datacenter and an emergency notice has been opened to determine the cause and resolve the issue.

    Update: A technician at the datacenter is investigating this now.

    Update 11:03:19 PM UTC: A power supply issue has been identified. Servers are coming back up now.

    Update 11:05:45 PM UTC: Issue resolved.

  • Date - 15/09/2010 10:06 - 15/09/2010 11:06
  • Last Updated - 15/09/2010 22:02
Firewall Issue (Resolved)
  • Priority - High
  • Affecting Server - WESTMERE
  • External monitoring reported a firewall configuration problem at 10:58 NZST. This required a server reboot. [Completed]

  • Date - 20/07/2010 10:58 - 20/07/2010 11:10
  • Last Updated - 20/07/2010 12:06
Performance Tuning (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • Performance tuning (Apache and MySQL) will be undertaken during business hours today. This has to be performed during "normal" operation so as to get the best results. No interruptions expected. [Completed]

  • Date - 01/07/2010 14:41
  • Last Updated - 01/07/2010 22:06
Web Pages Blocked (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • Some customers have reported that certain NZ websites and network connections were being blocked between 18:30 - 19:30 NZST tonight. This could be due to a firewall appliance. We are following up with the Orcon datacenter to see if there are any inappropriate firewall rules in place that could be causing the issue, and will endeavour to have it corrected.

    The datacenter reported back. Issue was with upstream security provider. This has now been resolved.

  • Date - 28/06/2010 20:19
  • Last Updated - 28/06/2010 23:46
Unscheduled Reboot (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • External monitoring reported a possible network failure at 20:02. We monitored the issue until 20:10. Server rebooted at 20:12 to bring services back online. Technicians are currently investigating the issue to determine the cause.

  • Date - 25/06/2010 20:14
  • Last Updated - 28/06/2010 12:26
Scheduled Reboot (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • Scheduled server reboot for WESTMERE at 22:30 NZT (10:30 UTC). Expected downtime 2 min - 5 min. [Completed]

  • Date - 22/06/2010 22:30 - 22/06/2010 22:36
  • Last Updated - 22/06/2010 23:03
Reported /img /css folders missing (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • Some customers have reported missing /img and /css folders. This has been investigated and is due to a Parallels Plesk Panel related issue. We will revert all missing files from the latest backup as soon as possible. [Completed]

    Explanation:

    These are default files as part of the Parallels Plesk Panel standard setup that were installed to all virtual host root directories during a recent migration to this server. While attempting to remove any unnecessary files from virtual hosts, some legitimate files were accidentally deleted. This was due to a bug in the script. We have reverted all /img and /css folders to the httpdocs directories. If you do not need these files, you can safely delete them.

    Customers who have manually restored these folders via SFTP from a local copy will notice a folder named css_restored or img_restored; this naming was used so as not to overwrite your uploaded files.

  • Date - 21/06/2010 21:00 - 22/06/2010 23:00
  • Last Updated - 22/06/2010 23:00
Unscheduled Reboot (Resolved)
  • Priority - Critical
  • Affecting Server - WESTMERE
  • MySQL server experienced a temporary shutdown at 03:29 due to a disk space issue. Disk space allocation was increased, which required a server reboot. MySQL was unavailable for 15 minutes. Disk allocation monitoring has been added to provide advance warning and prevent future issues.

  • Date - 19/06/2010 03:29 - 19/06/2010 03:45
  • Last Updated - 19/06/2010 23:09
Scheduled Reboot (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • Scheduled reboot for WESTMERE at 15:30 NZT (03:30 UTC) for a kernel upgrade. Expected downtime 2 min - 15 min (if a file system check is required).

  • Date - 07/06/2010 15:15 - 07/06/2010 15:52
  • Last Updated - 07/06/2010 15:54
Scheduled Reboot (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • PLESK-WESTMERE is scheduled for a reboot between 01:25 NZST and 02:00 NZST to enable an important Kernel update. Downtime expected to be 1-5 minutes.

  • Date - 03/06/2010 01:23 - 07/06/2010 15:52
  • Last Updated - 03/06/2010 01:27
Apache and MySQL Restart (Resolved)
  • Priority - Medium
  • Affecting Server - WESTMERE
  • Performance changes will be made to MySQL server. This will require a brief restart of MySQL and Apache. No expected downtime.

  • Date - 06/10/2020 01:12 - 01/06/2010 00:00
  • Last Updated - 02/06/2010 15:26
Nameserver IP Change (Resolved)
  • Priority - Low
  • Affecting Server - WESTMERE
  • For improved redundancy, our primary and secondary nameserver IPs will be changed.

    Old:

    ns1.createhosting.net.nz 65.99.197.195

    ns2.createhosting.net.nz 119.47.117.129

     

    New:

    ns1.createhosting.net.nz  66.119.228.130

    ns2.createhosting.net.nz  60.234.72.71

     

    In most cases you won't need to do anything and the changeover should be transparent. If you have manually set the "glue" records (IP Addresses) for our nameservers in your domain's DNS, you may need to update these at your registrar. If your domain is under our management, this won't be necessary as we would have already made the change for you.

  • Date - 00/00/0000 - 02/06/2010 15:26
  • Last Updated - 31/05/2010 20:21
Server Migration (Resolved)
  • Priority - Low
  • Sites on PLESK-N will be progressively migrated to a new server for improved performance and redundancy over the next few days.

    You will be notified by email on the day that your site is scheduled for migration. The notice will be sent to the account contact person recorded in our billing system.

    DNS SOA records (specifically the TTL) have all been set to 300 seconds in preparation for migration. This means there will be a 5-minute switchover window during site transfer in which your site may be unavailable.

    Migrations will be done as close as possible to off peak times to avoid disruption. Please contact us at support@createhosting.co.nz if you have any questions or concerns.

    Update 31/05/2010:

    All sites migrated.

  • Date - 20/05/2010 23:23 - 31/05/2010 00:00
  • Last Updated - 31/05/2010 20:13
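The 5-minute switchover window in the migration entry above follows directly from the lowered TTL: a resolver that cached the old record an instant before the cutover may keep serving it for at most TTL seconds. A minimal sketch of that arithmetic (the cutover time below is a hypothetical example, not our actual schedule):

```python
from datetime import datetime, timedelta

def dns_settle_time(cutover: datetime, ttl_seconds: int) -> datetime:
    """Latest moment a compliant resolver may still return the old record:
    an answer cached immediately before the cutover expires ttl_seconds later."""
    return cutover + timedelta(seconds=ttl_seconds)

cutover = datetime(2010, 5, 21, 2, 0)    # hypothetical off-peak cutover time
settled = dns_settle_time(cutover, 300)  # TTLs lowered to 300 s pre-migration
print(settled)  # 2010-05-21 02:05:00 -> all resolvers fresh within 5 minutes
```

This is also why TTLs are lowered well ahead of a migration: resolvers holding the old, longer TTL must first expire it before the short window applies.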
Memory Upgrade (Resolved)
  • Priority - Medium
  • PLESK-N server memory will be upgraded. Downtime expected to be no longer than 5 minutes. [completed 22:35 - downtime 2 min]

  • Date - 20/05/2010 22:41 - 20/05/2010 23:20
  • Last Updated - 20/05/2010 23:20
SFTP Down (Resolved)
  • Priority - Low
  • Reported SFTP connection failure. This has now been resolved and was due to an upgrade of Parallels Plesk Panel today.

  • Date - 20/05/2010 23:18 - 20/05/2010 00:00
  • Last Updated - 20/05/2010 23:20
Hard Drive Failure (Resolved)
  • Priority - Critical
  • We are currently experiencing a possible hard drive failure on the PLESK-N server. The drives are being tested and replaced, and the RAID resynced. This will cause reduced performance, and some sites may be unavailable during this time. We are working expediently to fix the issue and apologise for the inconvenience.

    UPDATE 15:43

    This server is operational. Hard drives are in the process of being replaced (HOT SWAP). Performance will be degraded during RAID resync. [resync completed]

    UPDATE 20/05/2010 06:00

    Server will be rebooted. [completed]

    UPDATE 20/05/2010 11:32

    We are experiencing mail issues. Incoming and in some cases outgoing mail is affected. We are currently restoring all mail configurations to repair the issue. [completed]

    UPDATE 20/05/2010 13:06

    As a precautionary measure, we are upgrading Plesk to the latest version. No downtime is expected, or downtime will be minimal. [completed]

    UPDATE 20/05/2010 14:11

    All mail issues resolved. Some users have reported that their password is no longer being accepted. If this applies to you, please contact us at support@createhosting.co.nz and we will reset it for you.

  • Date - 19/05/2010 09:06 - 20/05/2010 00:00
  • Last Updated - 20/05/2010 22:40
Domain migrations (Resolved)
  • Priority - Medium
  • Affecting System - Server Upgrades
  • A number of domains and services will be individually migrated from PLESK-D to PLESK-N. No downtime is expected. The SOA TTL (Time To Live) for all domains on PLESK-D has been set to 60 seconds to reduce the likelihood of any email problems and keep the migration near instantaneous. Major changes: PHP 4 to PHP 5, MySQL 4 to MySQL 5, CentOS 4 to CentOS 5.

  • Date - 25/03/2010 00:00 - 19/05/2010 14:10
  • Last Updated - 26/03/2010 10:34
Parallels Plesk Panel Upgrade (Resolved)
  • Priority - Low
  • Affecting Server - PLESK-D
  • PLESK-D will be incrementally upgraded from 8.6.0 to 9.2.2. No downtime is expected.

  • Date - 25/03/2010 00:00 - 25/03/2010 00:00
  • Last Updated - 26/03/2010 10:28
DoS attack (Resolved)
  • Priority - Critical
  • There was a DoS attack this afternoon.

    A DoS attack on the network caused an outage between 3:30 PM and 3:40 PM. The attack was blocked and service resumed as normal. We will continue to monitor the situation.

  • Date - 23/03/2010 10:23 - 23/03/2010 00:00
  • Last Updated - 26/03/2010 10:24
Suboptimal Routing, BGP Session Reset (Resolved)
  • Priority - Critical
  • IMPACT OF WORK: Suboptimal Routing, BGP Session Reset
    DATE/TIME: 05:00 - 06:00 CST / 11:00 - 12:00 UTC January 28 - http://tinyurl.com/yagodhg

    Colo4 will be performing maintenance on their border routers. The estimated period of impact is no more than five minutes per border router (of which there are two).

  • Date - 28/01/2010 00:29
  • Last Updated - 09/02/2010 14:31
Suboptimal Routing, BGP Session Reset (Resolved)
  • Priority - High
  • Affecting System - Border Routers
  • IMPACT OF WORK: Suboptimal Routing, BGP Session Reset
    DATE/TIME: 05:00 - 06:00 CST / 11:00 - 12:00 UTC January 28 - http://tinyurl.com/yagodhg

    The Dallas datacenter will be performing maintenance on their border routers. The estimated period of impact is no more than five minutes per border router (of which there are two).

  • Date - 28/01/2010 00:00 - 29/01/2010 00:00
  • Last Updated - 27/01/2010 15:28
SFTP Connection Issues (Resolved)
  • Priority - Medium
  • SFTP connection issues have been reported for PLESK-N. If you experience difficulty connecting via SFTP, please wait a few seconds and try again. This is an intermittent problem and we will advise as soon as it has been resolved.

  • Date - 21/12/2009 16:32
  • Last Updated - 22/12/2009 13:43
DPS Issues (Resolved)
  • Priority - High
  • Affecting System - DPS Fail-proof notification
  • There is currently an issue with DPS Fail-proof result notifications coming from a different IP address than normal. This has been reported to cause repeat order confirmation emails and in some cases empty orders to be processed through some ecommerce storefronts.

    We are currently working with DPS to get the issue resolved as soon as possible.

  • Date - 06/10/2009 16:55 - 07/10/2009 11:30
  • Last Updated - 06/10/2009 17:06
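The DPS incident above is a common failure mode: storefront callback handlers that trust notifications only from the gateway's known source IPs break (or misbehave) when the gateway starts sending from a new address. A minimal sketch of such an allowlist check, in Python; the IP ranges and function name are illustrative placeholders, not DPS's actual notification addresses:

```python
import ipaddress

# Hypothetical allowlist of gateway notification sources. These are
# documentation-reserved example ranges (TEST-NET), not real DPS IPs.
ALLOWED_SOURCES = [
    ipaddress.ip_network("203.0.113.0/28"),   # example range: .0 - .15
    ipaddress.ip_network("198.51.100.7/32"),  # example single host
]

def is_trusted_source(remote_ip: str) -> bool:
    """Return True if the callback's source IP falls within an allowed range."""
    ip = ipaddress.ip_address(remote_ip)
    return any(ip in net for net in ALLOWED_SOURCES)

print(is_trusted_source("203.0.113.5"))  # True  (inside the /28)
print(is_trusted_source("192.0.2.1"))    # False (not allowlisted)
```

A handler gated this way silently drops notifications when the gateway's IP changes, which is why pairing the IP check with cryptographic verification of the notification payload (where the gateway supports it) is the more robust design.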
Scheduled Reboot 9PM 02/10/2009 (Resolved)
  • Priority - High
  • PLESK-N will be rebooted as part of a kernel upgrade. PHP will be upgraded. Expected downtime 5 min.

  • Date - 02/10/2009 21:00 - 02/10/2009 21:59
  • Last Updated - 02/10/2009 21:27