<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>pushr Status - Incident history</title>
    <link>https://pushr.instatus.com</link>
    <description>pushr</description>
    <pubDate>Tue, 24 Feb 2026 08:00:00 +0000</pubDate>
    
<item>
  <title>Sonic Object Storage Maintenance</title>
  <description>
    Type: Maintenance
    Duration: 30 minutes

    Affected Components: Sonic Object Storage
    Feb 24, 08:00:00 GMT+0 - Identified - We are planning scheduled maintenance for this window. During the maintenance window, Sonic will temporarily be inaccessible.
    Feb 24, 08:00:01 GMT+0 - Identified - Maintenance is now in progress.
    Feb 24, 08:30:00 GMT+0 - Completed - Maintenance has completed successfully.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 30 minutes</p>
    <p><strong>Affected Components:</strong> Sonic Object Storage</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning scheduled maintenance for this window. During the maintenance window, Sonic will temporarily be inaccessible.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 24 Feb 2026 08:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/cmlzcumvg0dji737b8xdzbko5</link>
  <guid>https://pushr.instatus.com/maintenance/cmlzcumvg0dji737b8xdzbko5</guid>
</item>

<item>
  <title>DNS partial outage</title>
  <description>
    Type: Incident
    Duration: 31 minutes

    Affected Components: Anycast DNS
    Jul 22, 09:43:30 GMT+0 - Investigating - We are monitoring a partial DNS outage. We are currently investigating this incident.
    Jul 22, 10:08:21 GMT+0 - Identified - We&#039;ve identified the cause and are now restoring connectivity.
    Jul 22, 10:14:29 GMT+0 - Resolved - All connectivity has been restored.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 31 minutes</p>
    <p><strong>Affected Components:</strong> Anycast DNS</p>
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:43:30&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are monitoring a partial DNS outage. We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:08:21&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We&#039;ve identified the cause and are now restoring connectivity.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:14:29&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  All connectivity has been restored.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 22 Jul 2025 09:43:30 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cmdeckbks003ed50qv2abgnyn</link>
  <guid>https://pushr.instatus.com/incident/cmdeckbks003ed50qv2abgnyn</guid>
</item>

<item>
  <title>Partial DNS connectivity outage</title>
  <description>
    Type: Incident
    Duration: 9 hours and 31 minutes

    
    Jul 16, 10:40:43 GMT+0 - Investigating - We&#039;ve identified that pushr&#039;s DNS IP addresses are not reachable from Cogent (AS174)&#039;s network. We&#039;ve contacted the NOC at Cogent and are waiting for their response. Unfortunately, the impact of this event is global and would affect users of ISPs that use Cogent as their upstream.
    Jul 16, 20:12:04 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 9 hours and 31 minutes</p>
    
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:40:43&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We&#039;ve identified that pushr&#039;s DNS IP addresses are not reachable from Cogent (AS174)&#039;s network. We&#039;ve contacted the NOC at Cogent and are waiting for their response. Unfortunately, the impact of this event is global and would affect users of ISPs that use Cogent as their upstream.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:12:04&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 16 Jul 2025 10:40:43 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cmd5tyslg000u12vfvq23hjjf</link>
  <guid>https://pushr.instatus.com/incident/cmd5tyslg000u12vfvq23hjjf</guid>
</item>

<item>
  <title>Sonic object storage updates</title>
  <description>
    Type: Maintenance
    

    Affected Components: Sonic Object Storage
    Jun 19, 13:00:00 GMT+0 - Identified - The team will be applying updates to fix a compatibility issue with file uploads via Boto3 v1.36 and up, AWS Client v2.14.0 and up. Storage availability will be disrupted briefly; the disruption is expected to last up to 5 minutes.
    Jun 19, 12:43:21 GMT+0 - Completed - This maintenance has been canceled due to concerns with the compatibility of the planned updates. The team will reschedule for a future date.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    
    <p><strong>Affected Components:</strong> Sonic Object Storage</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The team will be applying updates to fix a compatibility issue with file uploads via Boto3 v1.36 and up, AWS Client v2.14.0 and up. Storage availability will be disrupted briefly; the disruption is expected to last up to 5 minutes.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:43:21&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  This maintenance has been canceled due to concerns with the compatibility of the planned updates. The team will reschedule for a future date.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 19 Jun 2025 12:43:21 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/cmc37z4hc0011103wh4yiy6hs</link>
  <guid>https://pushr.instatus.com/maintenance/cmc37z4hc0011103wh4yiy6hs</guid>
</item>

<item>
  <title>Sonic object storage update</title>
  <description>
    Type: Maintenance
    Duration: 5 minutes

    Affected Components: Sonic Object Storage
    Apr 10, 12:35:04 GMT+0 - Completed - Maintenance has completed successfully.
    Apr 10, 12:30:00 GMT+0 - Identified - We are planning a restart of object storage master servers to apply updates that improve S3 API compatibility. A brief period of unavailability (1-5 minutes) is expected.
    Apr 10, 12:30:01 GMT+0 - Identified - Maintenance is now in progress.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 5 minutes</p>
    <p><strong>Affected Components:</strong> Sonic Object Storage</p>
    &lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:35:04&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning a restart of object storage master servers to apply updates that improve S3 API compatibility. A brief period of unavailability (1-5 minutes) is expected.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 10&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:30:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 10 Apr 2025 12:30:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/cm9bab0qd007inhsy7mgv7d46</link>
  <guid>https://pushr.instatus.com/maintenance/cm9bab0qd007inhsy7mgv7d46</guid>
</item>

<item>
  <title>Access router and switches maintenance</title>
  <description>
    Type: Maintenance
    Duration: 2 hours

    Affected Components: Sonic Object Storage
    Dec 2, 03:30:00 GMT+0 - Identified - Our data center provider will be carrying out maintenance work on the core router and the connected switches, which is expected to cause temporary disruption in network traffic to and from our Sonic object storage service. The maximum expected service unavailability is 2 hours. This maintenance will not affect the Sonic Quantum tier storage service. We apologise for any inconvenience and thank you for your patience.
    Dec 2, 05:30:00 GMT+0 - Completed - Maintenance has completed successfully.
    Dec 2, 03:30:01 GMT+0 - Identified - Maintenance is now in progress.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 2 hours</p>
    <p><strong>Affected Components:</strong> Sonic Object Storage</p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Our data center provider will be carrying out maintenance work on the core router and the connected switches, which is expected to cause temporary disruption in network traffic to and from our Sonic object storage service. The maximum expected service unavailability is 2 hours. This maintenance will not affect the Sonic Quantum tier storage service. We apologise for any inconvenience and thank you for your patience.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;05:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;03:30:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 2 Dec 2024 03:30:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/cm3ik0ef6000ox198okcyyyng</link>
  <guid>https://pushr.instatus.com/maintenance/cm3ik0ef6000ox198okcyyyng</guid>
</item>

<item>
  <title>Urgent maintenance on datacenter equipment</title>
  <description>
    Type: Incident
    Duration: 12 minutes

    Affected Components: Dashboard, Sonic Object Storage
    Oct 24, 07:06:16 GMT+0 - Investigating - We&#039;ve been informed of urgent maintenance on router(s) inside the data center where Pushr&#039;s storage and database servers are located. We have not observed any impact on our services so far, but based on the provided information, this maintenance could lead to loss of connectivity. We will follow up with updates as they come.
    Oct 24, 07:18:26 GMT+0 - Resolved - We&#039;ve been informed that this maintenance has been rescheduled for a different date. We will open a separate maintenance window for it.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 12 minutes</p>
    <p><strong>Affected Components:</strong> Dashboard, Sonic Object Storage</p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:06:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We&#039;ve been informed of urgent maintenance on router(s) inside the data center where Pushr&#039;s storage and database servers are located. We have not observed any impact on our services so far, but based on the provided information, this maintenance could lead to loss of connectivity. We will follow up with updates as they come.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:18:26&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We&#039;ve been informed that this maintenance has been rescheduled for a different date. We will open a separate maintenance window for it.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 24 Oct 2024 07:06:16 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cm2mymagd000i10cvcjhswi1n</link>
  <guid>https://pushr.instatus.com/incident/cm2mymagd000i10cvcjhswi1n</guid>
</item>

<item>
  <title>Dashboard degraded performance</title>
  <description>
    Type: Incident
    Duration: 2 hours and 12 minutes

    Affected Components: Dashboard
    Oct 14, 09:34:15 GMT+0 - Investigating - We are currently investigating a performance issue with the account dashboard which causes slow responses and timeouts.
    Oct 14, 11:45:51 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 hours and 12 minutes</p>
    <p><strong>Affected Components:</strong> Dashboard</p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:34:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating a performance issue with the account dashboard which causes slow responses and timeouts.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:45:51&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 14 Oct 2024 09:34:15 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cm28ti2ie000m1upa5a6rvx2k</link>
  <guid>https://pushr.instatus.com/incident/cm28ti2ie000m1upa5a6rvx2k</guid>
</item>

<item>
  <title>Delayed Pull zones synchronisation</title>
  <description>
    Type: Incident
    Duration: 2 hours and 39 minutes

    Affected Components: CDN Edge
    Aug 31, 20:37:15 GMT+0 - Resolved - This incident has been resolved.
    Aug 31, 17:58:30 GMT+0 - Investigating - Pull zones are failing to sync to a part of our edge network. We are currently investigating this incident.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 hours and 39 minutes</p>
    <p><strong>Affected Components:</strong> CDN Edge</p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:37:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;17:58:30&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Pull zones are failing to sync to a part of our edge network. We are currently investigating this incident.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 31 Aug 2024 17:58:30 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cm0ig52b1005elfa294h8velb</link>
  <guid>https://pushr.instatus.com/incident/cm0ig52b1005elfa294h8velb</guid>
</item>

<item>
  <title>Dashboard offline</title>
  <description>
    Type: Incident
    Duration: 10 minutes

    Affected Components: Dashboard
    Aug 19, 21:17:32 GMT+0 - Monitoring - We implemented a fix and are currently monitoring the result.
    Aug 19, 21:23:24 GMT+0 - Resolved - This incident has been resolved.
    Aug 19, 21:13:27 GMT+0 - Investigating - Our customer dashboard has become unavailable. We are currently investigating this incident.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 10 minutes</p>
    <p><strong>Affected Components:</strong> Dashboard</p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:17:32&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We implemented a fix and are currently monitoring the result.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:23:24&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:13:27&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Our customer dashboard has become unavailable. We are currently investigating this incident.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 19 Aug 2024 21:13:27 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cm01htjow000m6dsbjiykvaxm</link>
  <guid>https://pushr.instatus.com/incident/cm01htjow000m6dsbjiykvaxm</guid>
</item>

<item>
  <title>Sonic object storage gateways remapping</title>
  <description>
    Type: Maintenance
    Duration: 2 hours

    Affected Components: Sonic Object Storage
    Jul 31, 13:15:00 GMT+0 - Identified - As part of an infrastructure upgrade we will be remapping FTP, S3 and web file manager hostnames to new IP addresses. This maintenance is expected to cause a short temporary discrepancy between content that is being uploaded, content that is shown when listing buckets and folders, and the availability of newly uploaded content during the maintenance window. The team will be actively synchronising metadata between the old and the new infrastructure while the remap propagates to minimize the impact. Data that is already in our edge cache will not be affected by this maintenance. FTP users over SSL/TLS might need to accept new certificates depending on the FTP client used.
    Jul 31, 15:15:00 GMT+0 - Completed - Maintenance has completed successfully.
    Jul 31, 13:15:01 GMT+0 - Identified - Maintenance is now in progress.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 2 hours</p>
    <p><strong>Affected Components:</strong> Sonic Object Storage</p>
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:15:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  As part of an infrastructure upgrade we will be remapping FTP, S3 and web file manager hostnames to new IP addresses. This maintenance is expected to cause a short temporary discrepancy between content that is being uploaded, content that is shown when listing buckets and folders, and the availability of newly uploaded content during the maintenance window. The team will be actively synchronising metadata between the old and the new infrastructure while the remap propagates to minimize the impact. Data that is already in our edge cache will not be affected by this maintenance. FTP users over SSL/TLS might need to accept new certificates depending on the FTP client used.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:15:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 31&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:15:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 31 Jul 2024 13:15:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clz9u3e2b182623ihn2e2ic88ih</link>
  <guid>https://pushr.instatus.com/maintenance/clz9u3e2b182623ihn2e2ic88ih</guid>
</item>

<item>
  <title>Transcoding nodes maintenance</title>
  <description>
    Type: Maintenance
    Duration: 4 hours

    Affected Components: Media Platform GPU Cloud
    Jul 9, 22:00:00 GMT+0 - Completed - Maintenance has completed successfully.
    Jul 9, 18:00:00 GMT+0 - Identified - We are planning scheduled maintenance of transcoding servers inside the Media Platform cluster during this window. This maintenance is potentially service impacting and may lead to unavailability of active live streams.
    Jul 9, 18:00:01 GMT+0 - Identified - Maintenance is now in progress.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 4 hours</p>
    <p><strong>Affected Components:</strong> Media Platform GPU Cloud</p>
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;22:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning scheduled maintenance of transcoding servers inside the Media Platform cluster during this window. This maintenance is potentially service impacting and may lead to unavailability of active live streams.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 9 Jul 2024 18:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clycyul99129095ipoi3czjt62c</link>
  <guid>https://pushr.instatus.com/maintenance/clycyul99129095ipoi3czjt62c</guid>
</item>

<item>
  <title>Sonic volumes vacuum</title>
  <description>
    Type: Maintenance
    Duration: 3 days, 1 hour and 47 minutes

    Affected Components: Sonic Object Storage
    Jun 21, 07:00:00 GMT+0 - Identified - We will be purging 550TB of recently deleted data from the cluster during this maintenance window. Volume vacuuming is usually done at regular intervals and has no impact on Sonic performance, but due to the large amount of data that needs to be removed we have decided to lock the cluster and execute the vacuuming manually. We expect this to cause temporary performance degradation in reads and writes from/to Sonic. Cached content on our network edge will not be affected. During the lock all read and write operations will still be possible, but newly uploaded content will not be erasure coded until we release the lock. Please note that the durability SLA will not apply to these files until the locks are removed and the initial EC operation completes. We advise customers to keep local copies of content uploaded during this maintenance window until it is closed and erasure coding is re-enabled. Thank you for your understanding.
    Jun 21, 07:00:01 GMT+0 - Identified - Maintenance is now in progress.
    Jun 24, 08:46:56 GMT+0 - Completed - Maintenance has completed successfully.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 days, 1 hour and 47 minutes</p>
    <p><strong>Affected Components:</strong> Sonic Object Storage</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We will be purging 550TB of recently deleted data from the cluster during this maintenance window. Volume vacuuming is usually done at regular intervals and has no impact on Sonic performance, but due to the large amount of data that needs to be removed we have decided to lock the cluster and execute the vacuuming manually. We expect this to cause temporary performance degradation in reads and writes from/to Sonic. Cached content on our network edge will not be affected. During the lock all read and write operations will still be possible, but newly uploaded content will not be erasure coded until we release the lock. Please note that the durability SLA will not apply to these files until the locks are removed and the initial EC operation completes. We advise customers to keep local copies of content uploaded during this maintenance window until it is closed and erasure coding is re-enabled. Thank you for your understanding.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 24&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:46:56&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 21 Jun 2024 07:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clxn8i76d7459bdoltanerp39</link>
  <guid>https://pushr.instatus.com/maintenance/clxn8i76d7459bdoltanerp39</guid>
</item>

<item>
  <title>SSL certificates not issued for new hostnames</title>
  <description>
    Type: Incident
    Duration: 1 day, 20 hours and 47 minutes

    Affected Components: Dashboard
    Jun 17, 15:55:53 GMT+0 - Investigating - Newly created hostnames are failing to obtain their SSL certificates, preventing HTTPS from being enabled. We are currently investigating this incident.
    Jun 19, 12:43:01 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 1 day, 20 hours and 47 minutes</p>
    <p><strong>Affected Components:</strong> Dashboard</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 17&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:55:53&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Newly created hostnames are failing to obtain their SSL certificates, preventing HTTPS from being enabled. We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:43:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 17 Jun 2024 15:55:53 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clxj5qhnl9353bsloick9p33u</link>
  <guid>https://pushr.instatus.com/incident/clxj5qhnl9353bsloick9p33u</guid>
</item>

<item>
  <title>Dallas DC outage</title>
  <description>
    Type: Incident
    Duration: 9 hours and 9 minutes

    Affected Components: CDN Edge
    May 26, 12:04:26 GMT+0 - Identified - Our Dallas location is currently offline due to the data center being hit by a tornado earlier today. Traffic is rerouted to nearby locations and will be served from there until power lines and connectivity can be restored.
    May 26, 21:13:42 GMT+0 - Resolved - This incident has been resolved and the facility is operational again. We are routing traffic for this region back to this PoP.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 9 hours and 9 minutes</p>
    <p><strong>Affected Components:</strong> CDN Edge</p>
    &lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:04:26&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Our Dallas location is currently offline due to the data center being hit by a tornado earlier today. Traffic is rerouted to nearby locations and will be served from there until power lines and connectivity can be restored.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 26&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:13:42&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved and the facility is operational again. We are routing traffic for this region back to this PoP.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sun, 26 May 2024 12:04:26 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clwnhs3pz201422axoanckqi1xk</link>
  <guid>https://pushr.instatus.com/incident/clwnhs3pz201422axoanckqi1xk</guid>
</item>

<item>
  <title>Sonic S3 gateway errors</title>
  <description>
    Type: Incident
    Duration: 13 hours and 11 minutes

    Affected Components: Sonic Object Storage
    May 25, 07:44:52 GMT+0 - Investigating - We are aware of gateway timeouts on Sonic object storage. We are currently investigating this incident.
    May 25, 08:46:46 GMT+0 - Identified - We&#039;ve identified the cause of the errors and have restored the operational state of the gateway. The team is now starting to work on a permanent solution to the issue.
    May 25, 09:49:18 GMT+0 - Identified - Work on a permanent solution continues. At present some uploads to Sonic might continue to fail. Updates will follow.
    May 25, 11:53:59 GMT+0 - Monitoring - We&#039;ve deployed a permanent fix and observe a drop in upload errors to the S3 gateway. We expect error counts to reach zero within 30 minutes. Customers are advised to check the integrity of files uploaded during this partial outage, as some may be 0 bytes in size. We continue to monitor the situation and are prepared to resume work on this incident should the applied fixes not be enough to remedy the issue. No data has been lost during this incident.
    May 25, 20:55:59 GMT+0 - Resolved - We consider this incident to be resolved. Since the last update there have been zero upload errors logged, and Sonic has been fully operational. We&#039;ve also noted that Sonic was not returning an error when uploads failed and resulted in 0 byte files. While failure due to the same circumstances as in this incident should no longer be possible, we will be issuing another patch to ensure proper handling of such events.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 13 hours and 11 minutes</p>
    <p><strong>Affected Components:</strong> Sonic Object Storage</p>
    &lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:44:52&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are aware of gateway timeouts on Sonic object storage. We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:46:46&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We&#039;ve identified the cause of the errors and have restored the operational state of the gateway. The team is now starting to work on a permanent solution to the issue.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:49:18&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Work on a permanent solution continues. At present some uploads towards Sonic might continue to fail. Updates will follow.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:53:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We&#039;ve deployed a permanent fix and we observe a drop in upload errors to the S3 gateway. We expect error counts to reach zero within 30 minutes. Customers are advised to check the integrity of the files uploaded during this partial outage as some may actually be 0 bytes in size. We continue to monitor the situation and are prepared to resume work on this incident should the applied fixes not be enough to remedy the issue. No data has been lost during this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:55:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We consider this incident to be resolved. Since the last update there have been zero upload errors logged, and Sonic has been fully operational. We&#039;ve also taken note of the fact that Sonic was not returning an error when uploads failed and resulted in 0 byte files. While failure due to the same circumstances as in this incident should no longer be possible in the future, we will be issuing another patch to address proper handling of such types of events.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 25 May 2024 07:44:52 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clwlt2fzk23097bboqsg8iajd7</link>
  <guid>https://pushr.instatus.com/incident/clwlt2fzk23097bboqsg8iajd7</guid>
</item>

<item>
  <title>Sonic web file manager downtime</title>
  <description>
    Type: Incident
    Duration: 25 days, 22 hours and 54 minutes

    Affected Components: Dashboard
    Feb 7, 11:00:22 GMT+0 - Investigating - We&#039;ve disabled access to the web file manager of our Sonic object storage service temporarily as we investigate the cause of failing file uploads reported by some customers.  Feb 7, 15:22:27 GMT+0 - Monitoring - We implemented a fix and are currently monitoring the result. Sonic&#039;s web file manager is operational again. May 2, 09:54:27 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 25 days, 22 hours and 54 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:00:22&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We&#039;ve disabled access to the web file manager of our Sonic object storage service temporarily as we investigate the cause of failing file uploads reported by some customers.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:22:27&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We implemented a fix and are currently monitoring the result. Sonic&#039;s web file manager is operational again.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 2&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:54:27&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 7 Feb 2024 11:00:22 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clsbohvce163965bjn0crwodr12</link>
  <guid>https://pushr.instatus.com/incident/clsbohvce163965bjn0crwodr12</guid>
</item>

<item>
  <title>Amsterdam connectivity interruption</title>
  <description>
    Type: Incident
    Duration: 9 hours and 49 minutes

    Affected Components: Sonic Object Storage
    Jan 8, 09:52:16 GMT+0 - Resolved - This incident has been resolved. All services in Amsterdam are operational again. Jan 8, 00:03:35 GMT+0 - Investigating - We’ve lost connectivity in our Amsterdam edge. Sonic’s web file manager has also been affected. We are currently investigating this incident.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 9 hours and 49 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:52:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved. All services in Amsterdam are operational again.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:03:35&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We’ve lost connectivity in our Amsterdam edge. Sonic’s web file manager has also been affected. We are currently investigating this incident.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 8 Jan 2024 00:03:35 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clr45to5799198bin9627h1a8a</link>
  <guid>https://pushr.instatus.com/incident/clr45to5799198bin9627h1a8a</guid>
</item>

<item>
  <title>New York edge maintenance</title>
  <description>
    Type: Maintenance
    Duration: 3 hours

    Affected Components: CDN Edge
    Dec 20, 16:30:00 GMT+0 - Completed - Maintenance has completed successfully Dec 20, 13:30:00 GMT+0 - Identified - We are planning for a scheduled maintenance during which we will be switching network providers in our New York edge. To avoid service disruptions, we are starting to drain traffic from this location, while all requests will be routed to the next best nearby location (Toronto, Chicago). Dec 20, 13:30:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 3 hours</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning for a scheduled maintenance during which we will be switching network providers in our New York edge. To avoid service disruptions, we are starting to drain traffic from this location, while all requests will be routed to the next best nearby location (Toronto, Chicago).&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:30:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 20 Dec 2023 13:30:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clqcv8cgx100789bhn9pji1soyu</link>
  <guid>https://pushr.instatus.com/maintenance/clqcv8cgx100789bhn9pji1soyu</guid>
</item>

<item>
  <title>Sonic partial unavailability</title>
  <description>
    Type: Incident
    Duration: 13 days, 22 hours and 32 minutes

    Affected Components: Sonic Object Storage
    Nov 20, 14:34:55 GMT+0 - Investigating - Our Sonic cluster has lost connectivity to 11 disk drives. Files stored on them may be unavailable for fresh content that has not yet been erasure coded. We are currently investigating this incident. Nov 20, 16:40:31 GMT+0 - Identified - We’ve identified the issue. A patch is being prepared that should fix the issue. We’ll be working on this non-stop until fully resolved. Next update will come in a few hours. Nov 20, 20:56:53 GMT+0 - Identified - We are continuing to work on a fix for this incident. A 2-step patch is being applied. In step 1 we&#039;ve reconnected the offline drives and content previously unavailable is now online. In step 2 we are addressing the root cause of the issue which so far is believed to be related to temporary loss in connectivity between some servers in the cluster. This connectivity loss is also believed to be the cause of another issue that we&#039;ve received reports for today - some newly uploaded files may be returning HTTP5XX errors upon download. We are still attempting to confirm the link between the two issues and will be holding back step 2 until we have a better understanding of the second issue. Nov 20, 21:28:18 GMT+0 - Identified - We&#039;ve managed to resolve the temporary HTTP5xx errors on newly uploaded content. Step 2 of the patch will now be applied, and it should provide a permanent fix for the drives unavailability issue, but will not provide a permanent fix for the HTTP5xx issue. We will keep this incident open, but Sonic&#039;s state is now back to fully operational. Jan 4, 13:07:01 GMT+0 - Resolved - This incident has been resolved.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 13 days, 22 hours and 32 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:34:55&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Our Sonic cluster has lost connectivity to 11 disk drives. Files stored on them may be unavailable for fresh content that has not yet been erasure coded. We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:40:31&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We’ve identified the issue. A patch is being prepared that should fix the issue. We’ll be working on this non-stop until fully resolved. Next update will come in a few hours.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:56:53&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are continuing to work on a fix for this incident. A 2-step patch is being applied. In step 1 we&#039;ve reconnected the offline drives and content previously unavailable is now online. In step 2 we are addressing the root cause of the issue which so far is believed to be related to temporary loss in connectivity between some servers in the cluster. This connectivity loss is also believed to be the cause of another issue that we&#039;ve received reports for today - some newly uploaded files may be returning HTTP5XX errors upon download. We are still attempting to confirm the link between the two issues and will be holding back step 2 until we have a better understanding of the second issue.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:28:18&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We&#039;ve managed to resolve the temporary HTTP5xx errors on newly uploaded content. Step 2 of the patch will now be applied, and it should provide a permanent fix for the drives unavailability issue, but will not provide a permanent fix for the HTTP5xx issue. We will keep this incident open, but Sonic&#039;s state is now back to fully operational.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 4&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:07:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 20 Nov 2023 14:34:55 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clp70dhay52487bjmv3phlkzgo</link>
  <guid>https://pushr.instatus.com/incident/clp70dhay52487bjmv3phlkzgo</guid>
</item>

<item>
  <title>CDN zones sync issue</title>
  <description>
    Type: Incident
    Duration: 5 days, 19 hours and 54 minutes

    Affected Components: CDN Edge
    Oct 19, 12:42:51 GMT+0 - Investigating - We are experiencing issues with zone sync across our edge network. We are currently investigating this incident. Oct 19, 12:58:58 GMT+0 - Identified - We&#039;ve traced the issue down to a malfunction in our origin shield implementation. We&#039;ve temporarily disabled the origin shields on pull zones to allow proper edge synchronisation. A permanent fix is coming. Oct 25, 08:36:23 GMT+0 - Resolved - Closing this incident as resolved, and we continue to work on a permanent fix.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 5 days, 19 hours and 54 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:42:51&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are experiencing issues with zone sync across our edge network. We are currently investigating this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:58:58&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We&#039;ve traced the issue down to a malfunction in our origin shield implementation. We&#039;ve temporarily disabled the origin shields on pull zones to allow proper edge synchronisation. A permanent fix is coming.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:36:23&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Closing this incident as resolved, and we continue to work on a permanent fix.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 19 Oct 2023 12:42:51 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clnx6a30a3121azojsulfxnjq</link>
  <guid>https://pushr.instatus.com/incident/clnx6a30a3121azojsulfxnjq</guid>
</item>

<item>
  <title>Anti-DDoS shield deployment [DNS infrastructure]</title>
  <description>
    Type: Maintenance
    Duration: 1 day

    Affected Components: Anycast DNS
    Oct 12, 16:00:00 GMT+0 - Completed - Maintenance has completed successfully Oct 11, 16:00:00 GMT+0 - Identified - The team will start rolling out our custom protection system against denial/distributed denial of service attacks on PUSHR&#039;s DNS edge. We do not expect downtime but during the update, each DNS server will temporarily stop announcing our DNS prefix to the internet. This may lead to suboptimal latency in each location during the few minutes needed for the update to take place. Updates will be done one-by-one and will continue over the course of the next 24 hours to avoid any interruptions. In a second step, the team will start deploying the same update to our edge locations. This will be considered a separate maintenance window and will be announced at a later time. Oct 11, 16:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 day</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The team will start rolling out our custom protection system against denial/distributed denial of service attacks on PUSHR&#039;s DNS edge. We do not expect downtime but during the update, each DNS server will temporarily stop announcing our DNS prefix to the internet. This may lead to suboptimal latency in each location during the few minutes needed for the update to take place. Updates will be done one-by-one and will continue over the course of the next 24 hours to avoid any interruptions. In a second step, the team will start deploying the same update to our edge locations. This will be considered a separate maintenance window and will be announced at a later time.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 11 Oct 2023 16:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clnlql6fg71608bdoqxa87tyaq</link>
  <guid>https://pushr.instatus.com/maintenance/clnlql6fg71608bdoqxa87tyaq</guid>
</item>

<item>
  <title>Zone deployment errors</title>
  <description>
    Type: Incident
    Duration: 2 hours and 11 minutes

    Affected Components: CDN Edge
    Aug 25, 06:26:18 GMT+0 - Investigating - We are currently investigating an issue that is causing newly created CDN zones to fail to deploy across the edge network. Aug 25, 08:37:00 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 hours and 11 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;06:26:18&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating an issue that is causing newly created CDN zones to fail to deploy across the edge network.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:37:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 25 Aug 2023 06:26:18 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cllq7m06622021bfom0zd5f7ea</link>
  <guid>https://pushr.instatus.com/incident/cllq7m06622021bfom0zd5f7ea</guid>
</item>

<item>
  <title>Singapore power maintenance</title>
  <description>
    Type: Maintenance
    Duration: 9 hours

    Affected Components: CDN Edge
    Jul 29, 11:00:00 GMT+0 - Identified - Our supplier in Singapore will be performing a maintenance on a power feed in the data centre. During this maintenance we will power off the servers in this location to avoid power outage related risks on the equipment. Traffic will be redirected to Hong Kong from where it will be served until the maintenance is over. No service unavailability is expected. Jul 29, 11:00:01 GMT+0 - Identified - Maintenance is now in progress Jul 29, 20:00:00 GMT+0 - Completed - Maintenance has completed successfully 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 9 hours</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 29&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Our supplier in Singapore will be performing a maintenance on a power feed in the data centre. During this maintenance we will power off the servers in this location to avoid power outage related risks on the equipment. Traffic will be redirected to Hong Kong from where it will be served until the maintenance is over. No service unavailability is expected.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 29&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 29&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 29 Jul 2023 11:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clk860pjv694468wkof59h6z1jx</link>
  <guid>https://pushr.instatus.com/maintenance/clk860pjv694468wkof59h6z1jx</guid>
</item>

<item>
  <title>Amsterdam network upgrade</title>
  <description>
    Type: Maintenance
    Duration: 1 hour and 17 minutes

    Affected Components: CDN Edge
    Jul 21, 14:46:34 GMT+0 - Completed - Maintenance has completed successfully. Jul 21, 13:30:00 GMT+0 - Identified - We are preparing a network upgrade in Amsterdam. Traffic will be rerouted to nearby edge locations and no service disruption is expected.  Jul 21, 13:30:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour and 17 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:46:34&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are preparing a network upgrade in Amsterdam. Traffic will be rerouted to nearby edge locations and no service disruption is expected.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:30:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 21 Jul 2023 13:30:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clkccoh17620250caoch0jv9u5u</link>
  <guid>https://pushr.instatus.com/maintenance/clkccoh17620250caoch0jv9u5u</guid>
</item>

<item>
  <title>Sonic degraded performance</title>
  <description>
    Type: Incident
    Duration: 3 hours and 3 minutes

    Affected Components: Sonic Object Storage
    Jul 19, 09:00:03 GMT+0 - Investigating - We are currently investigating the cause of abnormally high disk I/O on Sonic&#039;s master nodes.  Jul 19, 09:33:13 GMT+0 - Monitoring - We implemented a fix and are currently monitoring the result.  Jul 19, 12:03:30 GMT+0 - Resolved - This incident has been resolved and we are now preparing to push a permanent fix. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 3 hours and 3 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:00:03&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating the cause of abnormally high disk I/O on Sonic&#039;s master nodes.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:33:13&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We implemented a fix and are currently monitoring the result.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:03:30&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved and we are now preparing to push a permanent fix.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 19 Jul 2023 09:00:03 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clk9ht752293344bool6fi361xl</link>
  <guid>https://pushr.instatus.com/incident/clk9ht752293344bool6fi361xl</guid>
</item>

<item>
  <title>Dashboard partial outage</title>
  <description>
    Type: Incident
    Duration: 35 minutes

    Affected Components: Dashboard
    Jul 19, 08:57:37 GMT+0 - Investigating - We are observing degraded dashboard performance with occasional timeouts leading to partial outages for some users. We are currently investigating the cause for this issue. Jul 19, 09:32:40 GMT+0 - Resolved - This incident has been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 35 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:57:37&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are observing degraded dashboard performance with occasional timeouts leading to partial outages for some users. We are currently investigating the cause for this issue.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jul &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:32:40&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 19 Jul 2023 08:57:37 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clk9hq2va385796beol4ta2od5x</link>
  <guid>https://pushr.instatus.com/incident/clk9hq2va385796beol4ta2od5x</guid>
</item>

<item>
  <title>Sonic Object Storage system updates</title>
  <description>
    Type: Maintenance
    Duration: 24 minutes

    Affected Components: Sonic Object Storage
    Jun 23, 11:53:41 GMT+0 - Completed - Maintenance has completed successfully and all APIs and services are available again. Jun 23, 11:30:01 GMT+0 - Identified - Maintenance is now in progress Jun 23, 11:30:00 GMT+0 - Identified - We are planning for a scheduled system update on Sonic&#039;s master nodes. To apply the updates, we will then restart the affected services. This event is expected to cause up to 60 seconds of unavailability of the S3 API, FTP uploads, and the web file manager. Content already stored on Sonic will remain accessible without interruptions.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 24 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:53:41&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully and all APIs and services are available again.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:30:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:30:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are planning for a scheduled system update on Sonic&#039;s master nodes. To apply the updates, we will then restart the affected services. This event is expected to cause up to 60 seconds of unavailability of the S3 API, FTP uploads, and the web file manager. Content already stored on Sonic will remain accessible without interruptions.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 23 Jun 2023 11:30:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clj8e7xlk244348bcotvwhr6bcn</link>
  <guid>https://pushr.instatus.com/maintenance/clj8e7xlk244348bcotvwhr6bcn</guid>
</item>

<item>
  <title>Billing system maintenance</title>
  <description>
    Type: Maintenance
    Duration: 15 days, 22 hours and 20 minutes

    Affected Components: Dashboard
    Jun 7, 13:04:34 GMT+0 - Identified - Due to the need for significant code edits, this maintenance window has been expanded until further notice. Updates will follow. We do not expect any service disruptions while work is ongoing. Jun 7, 11:21:00 GMT+0 - Identified - During this maintenance we will be updating some parts of the billing system to address an issue which has been discovered to affect zones using the Core network tier. Those who are affected might have seen discrepancies between used traffic and billed traffic when switching from Standard tier to Core tier. Billing will be paused for zones on the Core tier during this maintenance window. Jun 23, 09:41:29 GMT+0 - Completed - Maintenance has completed successfully.
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 15 days, 22 hours and 20 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:04:34&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Due to the need for significant code edits, this maintenance window has been expanded until further notice. Updates will follow. We do not expect any service disruptions while work is ongoing.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 7&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:21:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  During this maintenance we will be updating some parts of the billing system to address an issue which has been discovered to affect zones using the Core network tier. Those who are affected might have seen discrepancies between used traffic and billed traffic when switching from Standard tier to Core tier. Billing will be paused for zones on the Core tier during this maintenance window.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 23&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:41:29&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 7 Jun 2023 11:21:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clilmbswd5759b3oey1i7vzh5</link>
  <guid>https://pushr.instatus.com/maintenance/clilmbswd5759b3oey1i7vzh5</guid>
</item>

<item>
  <title>Amsterdam Edge &amp; SFS file manager outage</title>
  <description>
    Type: Incident
    Duration: 10 hours and 33 minutes

    
    May 8, 20:35:30 GMT+0 - Investigating - We are currently investigating a complete outage in Amsterdam. Traffic towards our edge in this region has been automatically rerouted to London. The web file manager for our deprecated storage service - SFS - is also reliant on this infrastructure and it is currently unavailable. Updates will follow. May 8, 21:00:20 GMT+0 - Identified - This issue is related to a power outage in the data centre where PUSHR&#039;s infrastructure is located. It might take up to 8 hours for this issue to be resolved. 
-- Note: The SFS service continues to accept uploads and to serve content via all other methods but the web file manager. May 9, 07:08:41 GMT+0 - Resolved - This incident has been resolved and Amsterdam is back online. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 10 hours and 33 minutes</p>
    
    &lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:35:30&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating a complete outage in Amsterdam. Traffic towards our edge in this region has been automatically rerouted to London. The web file manager for our deprecated storage service - SFS - is also reliant on this infrastructure and it is currently unavailable. Updates will follow.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:00:20&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  This issue is related to a power outage in the data centre where PUSHR&#039;s infrastructure is located. It might take up to 8 hours for this issue to be resolved. 
-- Note: The SFS service continues to accept uploads and to serve content via all other methods but the web file manager.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;May &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;07:08:41&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved and Amsterdam is back online.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 8 May 2023 20:35:30 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clhfay8f4117350ulmzoh2ijpb9</link>
  <guid>https://pushr.instatus.com/incident/clhfay8f4117350ulmzoh2ijpb9</guid>
</item>

<item>
  <title>Sonic object storage - Master nodes reboot</title>
  <description>
    Type: Maintenance
    Duration: 31 minutes

    Affected Components: Sonic Object Storage
    Mar 25, 18:00:00 GMT+0 - Identified - The team is preparing the master nodes in the cluster for a hardware reset, in our continuous attempts to resolve the issue which causes random temporary unavailability on file uploads and the S3 API. During the last maintenance window on March 22, a complete replacement of the hardware of a single master node was done, which ruled out faulty hardware component(s) as the root cause. Based on all collected information available from testing, available system logs and known specific bugs related to the AMD platform (on which Sonic is built), we will perform this reset to load the systems&#039; kernel with the &quot;iommu=pt&quot; flag. This will allow us to pass through AMD&#039;s technology which enables virtualisation of I/O resources (AMD-Vi). Should this attempt fail at resolving the issue, a decision has been made to initiate a switch of all master nodes to a different type of servers powered by Intel.

The expected unavailability during this maintenance window is 15 minutes. It could be extended in the event that we hit a boot issue with the kernel flag enabled and need to revert the configuration. Mar 25, 18:31:15 GMT+0 - Completed - The reboot with the new kernel flag has completed successfully and all services have been re-enabled. We will now continue monitoring. If this issue persists we will initiate the switch towards different hardware as mentioned in the description of this maintenance window. This is the last known issue holding Sonic back from exiting the beta stage. Mar 25, 18:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 31 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The team is preparing the master nodes in the cluster for a hardware reset, in our continuous attempts to resolve the issue which causes random temporary unavailability on file uploads and the S3 API. During the last maintenance window on March 22, a complete replacement of the hardware of a single master node was done, which ruled out faulty hardware component(s) as the root cause. Based on all collected information available from testing, available system logs and known specific bugs related to the AMD platform (on which Sonic is built), we will perform this reset to load the systems&#039; kernel with the &quot;iommu=pt&quot; flag. This will allow us to pass through AMD&#039;s technology which enables virtualisation of I/O resources (AMD-Vi). Should this attempt fail at resolving the issue, a decision has been made to initiate a switch of all master nodes to a different type of servers powered by Intel.

The expected unavailability during this maintenance window is 15 minutes. It could be extended in the event that we hit a boot issue with the kernel flag enabled and need to revert the configuration.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:31:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  The reboot with the new kernel flag has completed successfully and all services have been re-enabled. We will now continue monitoring. If this issue persists we will initiate the switch towards different hardware as mentioned in the description of this maintenance window. This is the last known issue holding Sonic back from exiting the beta stage.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;18:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 25 Mar 2023 18:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clfo56e222056726lncr81qokqy</link>
  <guid>https://pushr.instatus.com/maintenance/clfo56e222056726lncr81qokqy</guid>
</item>

<item>
  <title>Sonic Object Storage - Hardware replacement</title>
  <description>
    Type: Maintenance
    Duration: 1 hour and 51 minutes

    Affected Components: Sonic Object Storage
    Mar 22, 15:00:00 GMT+0 - Identified - We continue to observe instability in one of the master nodes in the Sonic cluster which leads to connectivity loss. After two unsuccessful attempts at fixing the issue, we&#039;ve decided to replace the entire server in order to rule out the majority of possible hardware errors, and to possibly avoid the need for further maintenance windows and downtime. 
At 1700 CET today the master node will be shut down, and its disk drives will be moved to a new server. During this event, uploads will be disabled and the S3 API will be temporarily shut down. Availability of some of the stored objects will be impacted too. We do not expect any data loss. The procedure is expected to take up to 90 minutes until services are restored. Mar 22, 16:03:52 GMT+0 - Identified - Data center technicians have now begun the process of moving the disk drives to a new server. Mar 22, 15:00:01 GMT+0 - Identified - Maintenance is now in progress Mar 22, 15:04:17 GMT+0 - Identified - We are now starting to drain existing upload connections in preparation to take a metadata snapshot. New upload connections (FTP) will be blocked, while existing ones will be allowed to finish. The S3 API will be shut down shortly. Mar 22, 16:50:43 GMT+0 - Completed - We&#039;ve migrated all disk drives successfully. All services have been re-enabled. This maintenance is now over and we are monitoring for errors. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour and 51 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We continue to observe instability in one of the master nodes in the Sonic cluster which leads to connectivity loss. After two unsuccessful attempts at fixing the issue, we&#039;ve decided to replace the entire server in order to rule out the majority of possible hardware errors, and to possibly avoid the need for further maintenance windows and downtime. 
At 1700 CET today the master node will be shut down, and its disk drives will be moved to a new server. During this event, uploads will be disabled and the S3 API will be temporarily shut down. Availability of some of the stored objects will be impacted too. We do not expect any data loss. The procedure is expected to take up to 90 minutes until services are restored.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:03:52&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Data center technicians have now begun the process of moving the disk drives to a new server.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:04:17&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are now starting to drain existing upload connections in preparation to take a metadata snapshot. New upload connections (FTP) will be blocked, while existing ones will be allowed to finish. The S3 API will be shut down shortly.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 22&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;16:50:43&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  We&#039;ve migrated all disk drives successfully. All services have been re-enabled. This maintenance is now over and we are monitoring for errors.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 22 Mar 2023 15:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clfjl6d0i326027hfoluwsie9cs</link>
  <guid>https://pushr.instatus.com/maintenance/clfjl6d0i326027hfoluwsie9cs</guid>
</item>

<item>
  <title>SFS service unavailability </title>
  <description>
    Type: Incident
    Duration: 32 minutes

    
    Mar 13, 08:18:45 GMT+0 - Investigating - Our legacy storage system - SFS - is currently unavailable. We are investigating the cause.  Mar 13, 08:50:57 GMT+0 - Resolved - This incident has been resolved and connectivity has been restored.  
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 32 minutes</p>
    
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:18:45&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  Our legacy storage system - SFS - is currently unavailable. We are investigating the cause.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:50:57&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved and connectivity has been restored.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 13 Mar 2023 08:18:45 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clf6jz1w3133235x1ob5s55cinb</link>
  <guid>https://pushr.instatus.com/incident/clf6jz1w3133235x1ob5s55cinb</guid>
</item>

<item>
  <title>Support desk slower response times</title>
  <description>
    Type: Maintenance
    Duration: 13 days, 17 hours and 22 minutes

    Affected Components: Dashboard
    Mar 13, 08:54:27 GMT+0 - Completed -  Feb 27, 15:32:13 GMT+0 - Identified - We would like to notify all customers that due to internal structure changes in our team, we expect to be slower than usual to respond to support tickets and live chat requests in the following days. We expect to get back to normal operation by early next week. We thank you for your understanding and your patience during this period. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 13 days, 17 hours and 22 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 13&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;08:54:27&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  .&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:32:13&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We would like to notify all customers that due to internal structure changes in our team, we expect to be slower than usual to respond to support tickets and live chat requests in the following days. We expect to get back to normal operation by early next week. We thank you for your understanding and your patience during this period.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Mon, 27 Feb 2023 15:32:13 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clemzfwce4825gzot3fxp8w9t</link>
  <guid>https://pushr.instatus.com/maintenance/clemzfwce4825gzot3fxp8w9t</guid>
</item>

<item>
  <title>Sonic Object Storage stability updates</title>
  <description>
    Type: Maintenance
    Duration: 36 minutes

    Affected Components: Sonic Object Storage
    Feb 16, 14:00:00 GMT+0 - Identified - Our team will be applying an update to the master nodes in the storage cluster to fix a runtime error that was noticed earlier today. Since this error affects the accessibility of the files in the cluster, we will be pushing this update on short notice. Expected unavailability of uncached content is 30 minutes. Feb 16, 14:35:43 GMT+0 - Completed - Maintenance has been completed successfully. All storage services have been resumed.  Feb 16, 14:00:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 36 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Our team will be applying an update to the master nodes in the storage cluster to fix a runtime error that was noticed earlier today. Since this error affects the accessibility of the files in the cluster, we will be pushing this update on short notice. Expected unavailability of uncached content is 30 minutes.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:35:43&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has been completed successfully. All storage services have been resumed.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 16 Feb 2023 14:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/cle6zldr797449rxmxayess1zr</link>
  <guid>https://pushr.instatus.com/maintenance/cle6zldr797449rxmxayess1zr</guid>
</item>

<item>
  <title>SSL certificates and mail system downtime</title>
  <description>
    Type: Incident
    Duration: 5 minutes

    Affected Components: Dashboard
    Jan 21, 11:20:19 GMT+0 - Resolved - This issue has now been resolved. Jan 21, 11:14:59 GMT+0 - Investigating - We are currently investigating an outage in the services responsible for SSL certificates and transactional email. Creating new SSL certificates is currently not possible. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 5 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:20:19&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This issue has now been resolved.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 21&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:14:59&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating an outage in the services responsible for SSL certificates and transactional email. Creating new SSL certificates is currently not possible.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 21 Jan 2023 11:14:59 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cld5ut8g510147hpokoi8fph73</link>
  <guid>https://pushr.instatus.com/incident/cld5ut8g510147hpokoi8fph73</guid>
</item>

<item>
  <title>Singapore maintenance [Equinix Internet Exchange] </title>
  <description>
    Type: Maintenance
    Duration: 8 hours

    Affected Components: CDN Edge
    Dec 9, 23:00:00 GMT+0 - Completed - Maintenance has completed successfully Dec 9, 15:00:01 GMT+0 - Identified - Maintenance is now in progress Dec 9, 15:00:00 GMT+0 - Identified - We have been informed that there will be a maintenance at the Equinix Internet Exchange in Singapore.
Timezone: CET (UTC+01:00)
December 09, 2022 16:00 - December 10, 2022 00:00
Duration: 8h

During this time an increase in latency is to be expected. We will be monitoring closely for packet loss and if any is observed, we will temporarily switch traffic from this edge PoP to nearby PoPs to avoid service interruptions. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 8 hours</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:00:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have been informed that there will be a maintenance at the Equinix Internet Exchange in Singapore.
Timezone: CET (UTC+01:00)
December 09, 2022 16:00 - December 10, 2022 00:00
Duration: 8h

During this time an increase in latency is to be expected. We will be monitoring closely for packet loss and if any is observed, we will temporarily switch traffic from this edge PoP to nearby PoPs to avoid service interruptions.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Fri, 9 Dec 2022 15:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/clbf7tk8i038491mvn0g5yqk6</link>
  <guid>https://pushr.instatus.com/maintenance/clbf7tk8i038491mvn0g5yqk6</guid>
</item>

<item>
  <title>DNS connectivity issues</title>
  <description>
    Type: Incident
    Duration: 11 hours and 15 minutes

    Affected Components: Anycast DNS
    Dec 1, 00:23:26 GMT+0 - Investigating - We are currently investigating this incident. PUSHR&#039;s DNS seems to be unreachable from multiple locations globally. Reason not yet known. Some services may be unavailable. Dec 1, 00:42:16 GMT+0 - Identified - We&#039;ve identified the scope of the issue and it seems to affect the EU and parts of the Middle East. We do not yet know the actual cause, but it appears that our anycast IP prefix is being advertised from our provider that serves all EU locations, while the name server IPs themselves are not reachable, which leads to a black hole. We are now shutting down all instances in the EU and expect that this will route DNS traffic to the US and SA. This should provide a temporary fix until we are able to understand this situation better. Dec 1, 01:32:23 GMT+0 - Monitoring - We now have global connectivity restored but will not rush into putting the affected name servers online again. We continue to monitor the situation and to wait on information from upstreams on the root cause of this incident. The team will gather at the start of business hours to discuss this unusual event and the way forward to avoid such disruptions in the future. This incident will be updated once again when all name server nodes are re-introduced to the network. Dec 1, 11:37:58 GMT+0 - Resolved - This incident has been resolved and the issue has been confirmed to be a software error in upstream providers&#039; equipment. We&#039;ve re-enabled all affected DNS PoPs and latency is back to normal. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 11 hours and 15 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:23:26&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating this incident. PUSHR&#039;s DNS seems to be unreachable from multiple locations globally. Reason not yet known. Some services may be unavailable.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:42:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We&#039;ve identified the scope of the issue and it seems to affect the EU and parts of the Middle East. We do not yet know the actual cause, but it appears that our anycast IP prefix is being advertised from our provider that serves all EU locations, while the name server IPs themselves are not reachable, which leads to a black hole. We are now shutting down all instances in the EU and expect that this will route DNS traffic to the US and SA. This should provide a temporary fix until we are able to understand this situation better.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;01:32:23&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  We now have global connectivity restored but will not rush into putting the affected name servers online again. We continue to monitor the situation and to wait on information from upstreams on the root cause of this incident. The team will gather at the start of business hours to discuss this unusual event and the way forward to avoid such disruptions in the future. This incident will be updated once again when all name server nodes are re-introduced to the network.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Dec &lt;var data-var=&#039;date&#039;&gt; 1&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:37:58&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has been resolved and the issue has been confirmed to be a software error in upstream providers&#039; equipment. We&#039;ve re-enabled all affected DNS PoPs and latency is back to normal.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 1 Dec 2022 00:23:26 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/clb4c2w3d5215i5opd5mpwkmu</link>
  <guid>https://pushr.instatus.com/incident/clb4c2w3d5215i5opd5mpwkmu</guid>
</item>

<item>
  <title>London edge unavailable</title>
  <description>
    Type: Incident
    Duration: 1 hour and 52 minutes

    Affected Components: CDN Edge
    Nov 12, 11:47:18 GMT+0 - Investigating - We are currently experiencing an issue with our London edge. Traffic has been rerouted to Amsterdam and no service disruption is observed. We&#039;ve requested help from the data center and will be updating this incident as soon as the problem is identified. Nov 12, 13:39:03 GMT+0 - Resolved - This incident has now been resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 1 hour and 52 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;11:47:18&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently experiencing an issue with our London edge. Traffic has been rerouted to Amsterdam and no service disruption is observed. We&#039;ve requested help from the data center and will be updating this incident as soon as the problem is identified.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 12&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:39:03&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  This incident has now been resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Sat, 12 Nov 2022 11:47:18 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cladv56le3598i7o491x95028</link>
  <guid>https://pushr.instatus.com/incident/cladv56le3598i7o491x95028</guid>
</item>

<item>
  <title>Analytics data unavailability (13:00 - 14:00 CET)</title>
  <description>
    Type: Incident
    

    Affected Components: Dashboard
    Nov 8, 15:11:36 GMT+0 - Resolved - Earlier today our team identified an issue with the database that holds the traffic analytics data for all CDN zones. While the source of the issue is still being investigated, all services have already been restored. Unfortunately, we&#039;ve had to discard analytics data for the period between 13:00 and 13:59 to restore the service without additional delays. No actual interruption in storage and serving facilities has taken place. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Nov &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:11:36&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Earlier today our team identified an issue with the database that holds the traffic analytics data for all CDN zones. While the source of the issue is still being investigated, all services have already been restored. Unfortunately, we&#039;ve had to discard analytics data for the period between 13:00 and 13:59 to restore the service without additional delays. No actual interruption in storage and serving facilities has taken place.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 8 Nov 2022 15:11:36 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cla8cohho18015gzln5vy2inru</link>
  <guid>https://pushr.instatus.com/incident/cla8cohho18015gzln5vy2inru</guid>
</item>

<item>
  <title>CDN zone files out of sync</title>
  <description>
    Type: Incident
    Duration: 17 hours and 17 minutes

    Affected Components: CDN Edge
    Oct 19, 15:47:14 GMT+0 - Investigating - We&#039;ve identified a CDN zone configuration issue that partially affects our Tokyo, JP and Johannesburg, ZA edge locations. Some zone files have been falling out of sync and may not have all their changes applied. A hotfix is already in testing and we are now temporarily disabling the two edge locations to allow traffic to drain before applying the fix. 
We will then apply the fix globally during night hours. No downtime is expected during this fix. Oct 20, 00:18:14 GMT+0 - Identified - Updates are now starting to take place across our edge network. No downtime or service interruption of any kind is expected. Ideally, this update will be completely unnoticeable, observable only as a slight latency increase. We will be taking our time to do this slowly. Oct 20, 09:04:20 GMT+0 - Resolved - We&#039;ve completed the required updates and are routing traffic back to the affected edge locations again. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 17 hours and 17 minutes</p>
    <p><strong>Affected Components:</strong> </p>
    &lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 19&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:47:14&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We&#039;ve identified a CDN zone configuration issue that partially affects our Tokyo, JP and Johannesburg, ZA edge locations. Some zone files have been falling out of sync and may not have all their changes applied. A hotfix is already in testing and we are now temporarily disabling the two edge locations to allow traffic to drain before applying the fix. 
We will then apply the fix globally during night hours. No downtime is expected during this fix.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:18:14&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Updates are now starting to take place across our edge network. No downtime or service interruption of any kind is expected. Ideally, this update will be completely unnoticeable, observable only as a slight latency increase. We will be taking our time to do this slowly.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Oct &lt;var data-var=&#039;date&#039;&gt; 20&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:04:20&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We&#039;ve completed the required updates and are routing traffic back to the affected edge locations again.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 19 Oct 2022 15:47:14 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cl9ft5abr203229sodgvelzu71</link>
  <guid>https://pushr.instatus.com/incident/cl9ft5abr203229sodgvelzu71</guid>
</item>

<item>
  <title>SFS storage disk array failure</title>
  <description>
    Type: Incident
    Duration: 1 day, 22 hours and 40 minutes

    
    Sep 27, 14:57:36 GMT+0 - Investigating - We are currently investigating a RAID array failure related to our storage product, SFS. There is no evidence of data loss at this time, but drive replacements need to take place. Due to this incident some customers may find that they cannot log into their storage spaces. We are working to resolve this issue and will follow up with an update as soon as one is available. Sep 27, 15:29:40 GMT+0 - Identified - The team is now working to isolate the faulty drives from the RAID array as we prepare for their physical replacement. During this stage the storage system will enter a read-only state. Writing new data as well as appending to/changing existing data on the system will not be possible. At this time we still do not have a reason to suspect any data loss. Updates to follow. Sep 27, 20:59:20 GMT+0 - Identified - Preparations for physical replacement of faulty drives have been completed and a request has been sent to the remote hands service in the data centre. We are now awaiting the replacement to take place. We do not have any indications of data loss, and content continues to be available in a read-only state. Updates to follow. Sep 27, 21:28:16 GMT+0 - Identified - Physical interventions have now begun. Temporary unavailability of content that is not cached on the edge of our CDN network is expected. Updates will follow. Sep 27, 22:30:15 GMT+0 - Identified - Physical drive replacements have now been completed. We are evaluating the scope of the incident in terms of data loss and are starting a rebuild of the array. We cannot confirm if there is any data loss at this stage, but we still do not have any data that might suggest so. Updates will continue. Sep 27, 23:33:32 GMT+0 - Identified - The team is observing the rebuild process. Unfortunately, it is currently not possible to restart affected systems with their default operating system images and the recovery processes are taking place on a live (rescue) OS. 
During this process, which may be lengthy, the SFS storage service will remain unavailable. Due to the nature of the incident it remains unclear if any data has actually been lost, but our findings so far show that data should be intact once all rebuild processes are completed. At this point in time customers who are using SFS as their primary origin for their content are advised to switch to their alternative storage source via a pull zone to avoid extended content unavailability, and to switch back to SFS once this issue is resolved. Updates will follow as we see progress on the rebuild process. Sep 28, 00:11:15 GMT+0 - Identified - The current ETA for the array rebuild is 16 hours. The team will let the processes run and we will temporarily suspend further updates until 10AM CET, ~8 hours from this update. Sep 28, 10:57:54 GMT+0 - Identified - We are continuing to monitor and wait for the RAID rebuild to complete. The process is currently at 70%. Sep 28, 14:08:38 GMT+0 - Identified - Array recovery is now nearing 90%. We continue to await the completion before proceeding with next steps. Sep 28, 19:43:02 GMT+0 - Identified - The array that holds customers&#039; data has now completed the recovery process. Our course of action at this point is to decide between making the customers&#039; data available immediately by mounting the array in the rescue OS, or to attempt to recover the OS and /boot arrays, which would allow us to boot into the on-disk operating system. The latter, if successful, will allow us to completely restore the SFS service without further prolonging the downtime. We&#039;ve decided to attempt this approach at the expense of a short additional downtime and are now starting the procedures that are required. The arrays that remain to be recovered are small in size and expected recovery time should be very short on these. 
However, this incident should still be considered ongoing in full, and until further notice the SFS service remains unavailable. Sep 28, 21:05:05 GMT+0 - Identified - We have managed to recover all arrays and we are now verifying that content is actually in place and not lost. At this moment we don&#039;t see any missing data. The team is facing issues with booting into the on-disk OS, which is preventing us from bringing the service back up. Efforts will continue and the next update will follow after 10AM CET or, if progress is being made, before that time. Sep 29, 10:55:46 GMT+0 - Monitoring - The team has managed to restore the entire system and services are now back up. No data loss has been reported for this incident. While we continue to monitor the situation and still need to bring some aux services up, we want to thank all customers for their patience and understanding during this incident. Sep 29, 13:37:21 GMT+0 - Resolved - We are closing this incident now. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 1 day, 22 hours and 40 minutes</p>
    
    &lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:57:36&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Investigating&lt;/strong&gt; -
  We are currently investigating a RAID array failure related to our storage product, SFS. There is no evidence of data loss at this time, but drive replacements need to take place. Due to this incident some customers may find that they cannot log into their storage spaces. We are working to resolve this issue and will follow up with an update as soon as one is available.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:29:40&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The team is now working to isolate the faulty drives from the RAID array as we prepare for their physical replacement. During this stage the storage system will enter a read-only state. Writing new data as well as appending to/changing existing data on the system will not be possible. At this time we still do not have a reason to suspect any data loss. Updates to follow.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;20:59:20&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Preparations for physical replacement of faulty drives have been completed and a request has been sent to the remote hands service in the data centre. We are now awaiting the replacement to take place. We do not have any indications of data loss, and content continues to be available in a read-only state. Updates to follow.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:28:16&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Physical interventions have now begun. Temporary unavailability of content that is not cached on the edge of our CDN network is expected. Updates will follow.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;22:30:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Physical drive replacements have now been completed. We are evaluating the scope of the incident in terms of data loss and are starting a rebuild of the array. We cannot confirm if there is any data loss at this stage, but we still do not have any data that might suggest so. Updates will continue.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;23:33:32&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The team is observing the rebuild process. Unfortunately, it is currently not possible to restart affected systems with their default operating system images and the recovery processes are taking place on a live (rescue) OS. During this process, which may be lengthy, the SFS storage service will remain unavailable. Due to the nature of the incident it remains unclear if any data has actually been lost, but our findings so far show that data should be intact once all rebuild processes are completed. At this point in time customers who are using SFS as their primary origin for their content are advised to switch to their alternative storage source via a pull zone to avoid extended content unavailability, and to switch back to SFS once this issue is resolved. Updates will follow as we see progress on the rebuild process.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;00:11:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The current ETA for the array rebuild is 16 hours. The team will let the processes run and we will temporarily suspend further updates until 10AM CET, ~8 hours from this update.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:57:54&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are continuing to monitor and wait for the RAID rebuild to complete. The process is currently at 70%.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;14:08:38&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Array recovery is now nearing 90%. We continue to await the completion before proceeding with next steps.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;19:43:02&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  The array that holds customers&#039; data has now completed the recovery process. Our course of action at this point is to decide between making the customers&#039; data available immediately by mounting the array in the rescue OS, or to attempt to recover the OS and /boot arrays, which would allow us to boot into the on-disk operating system. The latter, if successful, will allow us to completely restore the SFS service without further prolonging the downtime. We&#039;ve decided to attempt this approach at the expense of a short additional downtime and are now starting the procedures that are required. The arrays that remain to be recovered are small in size and expected recovery time should be very short on these. However, this incident should still be considered ongoing in full, and until further notice the SFS service remains unavailable.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;21:05:05&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We have managed to recover all arrays and we are now verifying that content is actually in place and not lost. At this moment we don&#039;t see any missing data. The team is facing issues with booting into the on-disk OS, which is preventing us from bringing the service back up. Efforts will continue and the next update will follow after 10AM CET or, if progress is being made, before that time.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 29&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:55:46&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; -
  The team has managed to restore the entire system and services are now back up. No data loss has been reported for this incident. While we continue to monitor the situation and still need to bring some aux services up, we want to thank all customers for their patience and understanding during this incident.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 29&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:37:21&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  We are closing this incident now.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 27 Sep 2022 14:57:36 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cl8kbopxy33258xeola1kvn8s8</link>
  <guid>https://pushr.instatus.com/incident/cl8kbopxy33258xeola1kvn8s8</guid>
</item>

<item>
  <title>Immediate migration of Denmark edge to new data centre</title>
  <description>
    Type: Incident
    Duration: 28 days, 1 hour and 1 minute

    Affected Components: CDN Edge
    Aug 30, 09:37:11 GMT+0 - Identified - Due to unforeseen circumstances our team needs to migrate our edge in Denmark to a different data centre (Skanderborg -&gt; Albertslund, DK). Traffic to this edge location has been rerouted to nearby countries temporarily as we begin the migration without delay. A temporary increase in latency may be observed by users from Denmark. Sep 27, 10:38:31 GMT+0 - Resolved - Resolved. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 28 days, 1 hour and 1 minute</p>
    <p><strong>Affected Components:</strong> CDN Edge</p>
    &lt;p&gt;&lt;small&gt;Aug &lt;var data-var=&#039;date&#039;&gt; 30&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:37:11&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Due to unforeseen circumstances our team needs to migrate our edge in Denmark to a different data centre (Skanderborg -&gt; Albertslund, DK). Traffic to this edge location has been rerouted to nearby countries temporarily as we begin the migration without delay. A temporary increase in latency may be observed by users from Denmark.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Sep &lt;var data-var=&#039;date&#039;&gt; 27&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:38:31&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Resolved&lt;/strong&gt; -
  Resolved.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 30 Aug 2022 09:37:11 +0000</pubDate>
  <link>https://pushr.instatus.com/incident/cl7fzwt8j31101jvn4n8rjlzjs</link>
  <guid>https://pushr.instatus.com/incident/cl7fzwt8j31101jvn4n8rjlzjs</guid>
</item>

<item>
  <title>Prague scheduled maintenance 09/06/2022</title>
  <description>
    Type: Maintenance
    Duration: 5 days

    Affected Components: Anycast DNS, CDN Edge
    Jun 9, 10:37:00 GMT+0 - Identified - During this maintenance window we will be decommissioning our anycast DNS service in Prague, CZ, and will also be making changes to the existing edge infrastructure. As a result traffic from the Czech Republic will be rerouted to nearby data centres temporarily. We do not expect any service disruption during this maintenance window. ETA for completion of the maintenance is 14/06/2022. Jun 14, 10:37:00 GMT+0 - Completed - Maintenance has completed successfully Jun 9, 10:37:01 GMT+0 - Identified - Maintenance is now in progress 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 5 days</p>
    <p><strong>Affected Components:</strong> Anycast DNS, CDN Edge</p>
    &lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:37:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  During this maintenance window we will be decommissioning our anycast DNS service in Prague, CZ, and will also be making changes to the existing edge infrastructure. As a result traffic from the Czech Republic will be rerouted to nearby data centres temporarily. We do not expect any service disruption during this maintenance window. ETA for completion of the maintenance is 14/06/2022.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 14&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:37:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Jun &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:37:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Thu, 9 Jun 2022 10:37:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/cl45h03jm0636kvoimgy9sm84</link>
  <guid>https://pushr.instatus.com/maintenance/cl45h03jm0636kvoimgy9sm84</guid>
</item>

<item>
  <title>Cache purge system updates</title>
  <description>
    Type: Maintenance
    Duration: 1 hour and 40 minutes

    Affected Components: CDN Edge, API, Dashboard
    Mar 16, 10:47:01 GMT+0 - Identified - Maintenance is now in progress Mar 16, 12:27:23 GMT+0 - Completed - Maintenance has been completed successfully. Purge functionalities have been re-enabled. Mar 16, 10:47:00 GMT+0 - Identified - We are pushing an update to the cache purge system. This update fixes an issue which could cause some purge requests to not be executed properly on all edge servers. During this maintenance window purging will temporarily be unavailable. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour and 40 minutes</p>
    <p><strong>Affected Components:</strong> CDN Edge, API, Dashboard</p>
    &lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:47:01&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:27:23&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has been completed successfully. Purge functionalities have been re-enabled.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Mar &lt;var data-var=&#039;date&#039;&gt; 16&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;10:47:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We are pushing an update to the cache purge system. This update fixes an issue which could cause some purge requests to not be executed properly on all edge servers. During this maintenance window purging will temporarily be unavailable.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 16 Mar 2022 10:47:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/cl0tfs76x2579408nmyiu1ki9s8</link>
  <guid>https://pushr.instatus.com/maintenance/cl0tfs76x2579408nmyiu1ki9s8</guid>
</item>

<item>
  <title>Sofia hardware upgrades</title>
  <description>
    Type: Maintenance
    Duration: 1 day, 23 hours and 8 minutes

    Affected Components: CDN Edge
    Feb 11, 12:36:05 GMT+0 - Completed - Maintenance was completed successfully. Feb 9, 15:27:00 GMT+0 - Identified - We will be replacing hardware in Sofia, Bulgaria, on 09 February 2022. To avoid availability issues, traffic from Bulgaria has been temporarily rerouted to nearby data centres. Feb 9, 13:28:15 GMT+0 - Identified - Maintenance is now in progress. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 day, 23 hours and 8 minutes</p>
    <p><strong>Affected Components:</strong> CDN Edge</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 11&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;12:36:05&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance was completed successfully.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:27:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We will be replacing hardware in Sofia, Bulgaria, on 09 February 2022. To avoid availability issues, traffic from Bulgaria has been temporarily rerouted to nearby data centres.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:28:15&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  Maintenance is now in progress.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 9 Feb 2022 15:27:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/ckzea1un21591328ebn5bdjkhimy</link>
  <guid>https://pushr.instatus.com/maintenance/ckzea1un21591328ebn5bdjkhimy</guid>
</item>

<item>
  <title>Helsinki hardware update</title>
  <description>
    Type: Maintenance
    Duration: 1 hour and 56 minutes

    Affected Components: CDN Edge
    Feb 9, 13:27:30 GMT+0 - Completed - Maintenance has been completed. Feb 9, 15:24:00 GMT+0 - Identified - We will be replacing hardware in Helsinki, Finland, on 09 February 2022. To avoid availability issues, traffic from Finland has been temporarily rerouted to nearby data centres. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour and 56 minutes</p>
    <p><strong>Affected Components:</strong> CDN Edge</p>
    &lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;13:27:30&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has been completed.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 9&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:24:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  We will be replacing hardware in Helsinki, Finland, on 09 February 2022. To avoid availability issues, traffic from Finland has been temporarily rerouted to nearby data centres.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Wed, 9 Feb 2022 15:24:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/ckzea0tj31589783ebn5s9u9p00w</link>
  <guid>https://pushr.instatus.com/maintenance/ckzea0tj31589783ebn5s9u9p00w</guid>
</item>

<item>
  <title>Frankfurt Edge upgrades</title>
  <description>
    Type: Maintenance
    Duration: 2 days, 11 hours and 14 minutes

    Affected Components: CDN Edge
    Jan 25, 22:00:00 GMT+0 - Identified - In order to support the compute needs of our continuous development of new and exciting features for 2022 we will be replacing existing edge servers in Frankfurt with newer, more powerful ones. This maintenance window is scheduled for 14 days, starting Jan 26th through Feb 08th. To avoid service disruptions all traffic in Germany has been temporarily rerouted to our nearby data centres. A slight increase in latency is expected to be the only impact of this maintenance. Updates will follow. Feb 8, 15:06:50 GMT+0 - Identified - This maintenance window is being extended by 2 days to February 10th, 2022. Apr 28, 09:14:17 GMT+0 - Completed - Maintenance has completed successfully. 
  </description>
  <content:encoded>
    <![CDATA[<p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 2 days, 11 hours and 14 minutes</p>
    <p><strong>Affected Components:</strong> CDN Edge</p>
    &lt;p&gt;&lt;small&gt;Jan &lt;var data-var=&#039;date&#039;&gt; 25&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;22:00:00&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  In order to support the compute needs of our continuous development of new and exciting features for 2022 we will be replacing existing edge servers in Frankfurt with newer, more powerful ones. This maintenance window is scheduled for 14 days, starting Jan 26th through Feb 08th. To avoid service disruptions all traffic in Germany has been temporarily rerouted to our nearby data centres. A slight increase in latency is expected to be the only impact of this maintenance. Updates will follow.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Feb &lt;var data-var=&#039;date&#039;&gt; 8&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;15:06:50&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Identified&lt;/strong&gt; -
  This maintenance window is being extended by 2 days to February 10th, 2022.&lt;/p&gt;
&lt;p&gt;&lt;small&gt;Apr &lt;var data-var=&#039;date&#039;&gt; 28&lt;/var&gt;, &lt;var data-var=&#039;time&#039;&gt;09:14:17&lt;/var&gt; GMT+0&lt;/small&gt;&lt;br&gt;&lt;strong&gt;Completed&lt;/strong&gt; -
  Maintenance has completed successfully.&lt;/p&gt;
]]>
  </content:encoded>
  <pubDate>Tue, 25 Jan 2022 22:00:00 +0000</pubDate>
  <link>https://pushr.instatus.com/maintenance/ckyu30jw1133740crobg2en8vwy</link>
  <guid>https://pushr.instatus.com/maintenance/ckyu30jw1133740crobg2en8vwy</guid>
</item>

  </channel>
  </rss>