Add imprecise comparison variants to SnapshotTesting extension
This adds variants of the `Snapshotting` extensions that allow for imprecise comparisons, i.e. using the `precision` and `perceptualPrecision` parameters.

## Why is this necessary?

Adding precision parameters has been a highly requested feature (see #63) to work around some simulator changes introduced in iOS 13. Historically the simulator has supported CPU-based rendering, giving us very stable image representations of views that we can compare pixel-by-pixel. Unfortunately, with iOS 13, Apple changed the simulator to use exclusively GPU-based rendering, which means that the resulting snapshots may differ slightly across machines (see pointfreeco/swift-snapshot-testing#313).

The negative effects of this were mitigated in SnapshotTesting by adding two precision controls to snapshot comparisons: a **perceptual precision** that controls how close in color two pixels need to be to count as unchanged (using the Lab ΔE distance between colors) and an overall **precision** that controls what portion of pixels between two images need to be the same (based on the per-pixel calculation) for the images to be considered unchanged. Setting these precisions to values below one enables engineers to record tests on one machine and run them on another (e.g. record new reference images on their laptop and then run tests on CI) without worrying about the tests failing due to differences in GPU rendering. This is great in theory, but in our testing we've found that even the lowest tolerances (near-one precision values) that consistently handle GPU differences between machine types still let through a significant number of visual regressions. In other words, there is no magic set of precision values that both avoids spurious failures caused by GPU rendering differences and catches minor visual regressions.
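To make the interaction between the two values concrete, here is a simplified sketch of how such a comparison could be structured. This is not SnapshotTesting's actual implementation; it assumes the per-pixel Lab ΔE distances (on the conventional 0–100 scale) have already been computed:

```swift
/// Simplified sketch (not SnapshotTesting's real algorithm) of how the two
/// precision values interact. `deltaEPerPixel` is assumed to hold a
/// precomputed Lab ΔE distance in [0, 100] for each pixel pair.
func imagesMatch(
    deltaEPerPixel: [Float],
    precision: Float,
    perceptualPrecision: Float
) -> Bool {
    // perceptualPrecision gates each individual pixel pair: a value of 1
    // requires an exact color match (ΔE == 0), while lower values tolerate
    // proportionally larger color distances.
    let maxAllowedDeltaE = (1 - perceptualPrecision) * 100
    let matchingPixels = deltaEPerPixel.filter { $0 <= maxAllowedDeltaE }.count

    // precision then requires that at least this fraction of pixels passed
    // the per-pixel gate.
    return Float(matchingPixels) >= precision * Float(deltaEPerPixel.count)
}
```

With both values at `1`, every pixel must match exactly; loosening either value admits GPU rendering noise, but, as described above, also admits small real regressions.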

This is especially true for accessibility snapshots. To start, tolerances seem to be more reliable when applied to relatively small snapshot images, but accessibility snapshots tend to be fairly large since they include both the view and the legend. Additionally, the text in the legend can change meaningfully while only reflecting a small number of pixel changes. For example, I ran a test of a full-screen snapshot on an iPhone 12 Pro with two columns of legend. Even a precision of `0.9999` (99.99%) was low enough to let through a regression where one of the elements lost its `.link` trait (represented by the text "Link." appended to the element's description in the snapshot). But even this high a precision _wasn't_ enough to handle the GPU rendering differences between a MacBook Pro and a Mac Mini. This is a simplified example since it only uses `precision`, not `perceptualPrecision`, but we've found many similar situations arise even with the combination.

Some teams have developed infrastructure to allow snapshots to run on the same hardware consistently and have built a developer process around that infrastructure, but many others have accepted lowering precision as a necessity today.

## Why create separate "imprecise" variants?

The simplest approach to adding tolerances would be to add the `precision` and `perceptualPrecision` parameters to the existing snapshot methods; however, I feel adding separate methods with an "imprecise" prefix is better in the long run. The naming is motivated by the idea that **it needs to be very obvious when what you're doing might result in unexpected/undesirable behavior**. In other words, when using one of the core snapshot variants, you should have extremely high confidence that a passing test means there are no regressions. When you use an "imprecise" variant, it's up to you to set your confidence level according to your chosen precision values. This is similar to the "unsafe" terminology around memory in the Swift API: you should generally feel very confident in the memory safety of your code, but any time you see "unsafe" it's a sign to be extra careful and not to draw unwarranted confidence from the compiler.
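For illustration, the prefix makes the reduced guarantee visible at the call site. In the sketch below, `MyView` and the `0.98` value are hypothetical, and the core variant is assumed to be named `.accessibilityImage`:

```swift
import SnapshotTesting
import XCTest

final class MyViewTests: XCTestCase {
    func testAccessibility() {
        let view = MyView()  // hypothetical view under test

        // Core variant: a passing test should give extremely high
        // confidence that there are no regressions.
        assertSnapshot(matching: view, as: .accessibilityImage())

        // Imprecise variant: the prefix signals that confidence in a
        // passing test depends on the chosen precision value
        // (0.98 is an arbitrary example, not a recommendation).
        assertSnapshot(matching: view, as: .impreciseAccessibilityImage(precision: 0.98))
    }
}
```

The reader scanning a test file can tell at a glance which assertions carry the weaker guarantee, without inspecting parameter lists.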

Longer term, I'm hopeful we can find alternative comparison algorithms that allow for GPU rendering differences without opening the door to regressions. We can integrate these into the core snapshot variants as long as they do not introduce opportunities for regressions, or add additional comparison variants to iterate on different approaches.
NickEntin committed Aug 16, 2023
1 parent f41a0d5 commit 69d57bb
Showing 2 changed files with 192 additions and 81 deletions.
@@ -1,5 +1,5 @@
//
// Copyright 2020 Square Inc.
// Copyright 2023 Block Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -48,50 +48,15 @@ extension Snapshotting where Value == UIView, Format == UIImage {
drawHierarchyInKeyWindow: Bool = false,
markerColors: [UIColor] = []
) -> Snapshotting {
guard isRunningInHostApplication else {
fatalError("Accessibility snapshot tests cannot be run in a test target without a host application")
}

return Snapshotting<UIView, UIImage>
.image(drawHierarchyInKeyWindow: drawHierarchyInKeyWindow)
.pullback { view in
let containerView = AccessibilitySnapshotView(
containedView: view,
viewRenderingMode: drawHierarchyInKeyWindow ? .drawHierarchyInRect : .renderLayerInContext,
markerColors: markerColors,
activationPointDisplayMode: activationPointDisplayMode
)

let window = UIWindow(frame: UIScreen.main.bounds)
window.makeKeyAndVisible()
containerView.center = window.center
window.addSubview(containerView)

do {
try containerView.parseAccessibility(useMonochromeSnapshot: useMonochromeSnapshot)
} catch AccessibilitySnapshotView.Error.containedViewExceedsMaximumSize {
fatalError(
"""
View is too large to render monochrome snapshot. Try setting useMonochromeSnapshot to false or \
use a different iOS version. In particular, this is known to fail on iOS 13, but was fixed in \
iOS 14.
"""
)
} catch AccessibilitySnapshotView.Error.containedViewHasUnsupportedTransform {
fatalError(
"""
View has an unsupported transform for the specified snapshot parameters. Try using an identity \
transform or changing the view rendering mode to render the layer in the graphics context.
"""
)
} catch {
fatalError("Failed to render snapshot image")
}

containerView.sizeToFit()

return containerView
}
// For now this calls through to the imprecise variant with a precision of 1. Eventually this should use an
// alternate comparison algorithm that allows for GPU rendering differences without admitting regressions.
return .impreciseAccessibilityImage(
showActivationPoints: activationPointDisplayMode,
useMonochromeSnapshot: useMonochromeSnapshot,
drawHierarchyInKeyWindow: drawHierarchyInKeyWindow,
markerColors: markerColors,
precision: 1
)
}

/// Snapshots the current view using the specified content size category to test Dynamic Type.
@@ -110,42 +75,7 @@ extension Snapshotting where Value == UIView, Format == UIImage {

/// Snapshots the current view simulating the way it will appear with Smart Invert Colors enabled.
public static var imageWithSmartInvert: Snapshotting {
func postNotification() {
NotificationCenter.default.post(
name: UIAccessibility.invertColorsStatusDidChangeNotification,
object: nil,
userInfo: nil
)
}

return Snapshotting<UIImage, UIImage>.image.pullback { view in
let requiresWindow = (view.window == nil && !(view is UIWindow))

if requiresWindow {
let window = UIApplication.shared.firstKeyWindow ?? UIWindow(frame: UIScreen.main.bounds)
window.addSubview(view)
}

view.layoutIfNeeded()

let statusUtility = UIAccessibilityStatusUtility()
statusUtility.mockInvertColorsStatus()
postNotification()

let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let image = renderer.image { context in
view.drawHierarchyWithInvertedColors(in: view.bounds, using: context)
}

statusUtility.unmockStatuses()
postNotification()

if requiresWindow {
view.removeFromSuperview()
}

return image
}
return .impreciseImageWithSmartInvert(precision: 1)
}

// MARK: - Internal Properties
@@ -0,0 +1,181 @@
//
// Copyright 2023 Block Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//

import SnapshotTesting
import UIKit

#if SWIFT_PACKAGE
import AccessibilitySnapshotCore
import AccessibilitySnapshotCore_ObjC
#endif

extension Snapshotting where Value == UIView, Format == UIImage {

/// Snapshots the current view with colored overlays of each accessibility element it contains, as well as an
/// approximation of the description that VoiceOver will read for each element.
///
/// - Important: Using a `precision` less than 1 may result in allowing regressions through.
///
/// - parameter showActivationPoints: When to show indicators for elements' accessibility activation points.
/// Defaults to showing activation points only when they are different than the default activation point for that
/// element.
/// - parameter useMonochromeSnapshot: Whether or not the snapshot of the `view` should be monochrome. Using a
/// monochrome snapshot makes it more clear where the highlighted elements are, but may make it difficult to
/// read certain views. Defaults to `true`.
/// - parameter drawHierarchyInKeyWindow: Whether or not to draw the view hierarchy in the key window, rather than
/// rendering the view's layer. This enables the rendering of `UIAppearance` and `UIVisualEffect`s.
/// - parameter markerColors: The array of colors which will be chosen from when creating the overlays.
/// - parameter precision: The portion of pixels that must match for the image to be considered "unchanged". Value
/// must be in the range `[0,1]`, where `0` means no pixels must match and `1` means all pixels must match.
public static func impreciseAccessibilityImage(
showActivationPoints activationPointDisplayMode: ActivationPointDisplayMode = .whenOverridden,
useMonochromeSnapshot: Bool = true,
drawHierarchyInKeyWindow: Bool = false,
markerColors: [UIColor] = [],
precision: Float
) -> Snapshotting {
guard isRunningInHostApplication else {
fatalError("Accessibility snapshot tests cannot be run in a test target without a host application")
}

return Snapshotting<UIView, UIImage>
.image(drawHierarchyInKeyWindow: drawHierarchyInKeyWindow, precision: precision)
.pullback { view in
let containerView = AccessibilitySnapshotView(
containedView: view,
viewRenderingMode: drawHierarchyInKeyWindow ? .drawHierarchyInRect : .renderLayerInContext,
markerColors: markerColors,
activationPointDisplayMode: activationPointDisplayMode
)

let window = UIWindow(frame: UIScreen.main.bounds)
window.makeKeyAndVisible()
containerView.center = window.center
window.addSubview(containerView)

do {
try containerView.parseAccessibility(useMonochromeSnapshot: useMonochromeSnapshot)
} catch AccessibilitySnapshotView.Error.containedViewExceedsMaximumSize {
fatalError(
"""
View is too large to render monochrome snapshot. Try setting useMonochromeSnapshot to false or \
use a different iOS version. In particular, this is known to fail on iOS 13, but was fixed in \
iOS 14.
"""
)
} catch AccessibilitySnapshotView.Error.containedViewHasUnsupportedTransform {
fatalError(
"""
View has an unsupported transform for the specified snapshot parameters. Try using an identity \
transform or changing the view rendering mode to render the layer in the graphics context.
"""
)
} catch {
fatalError("Failed to render snapshot image")
}

containerView.sizeToFit()

return containerView
}
}

/// Snapshots the current view simulating the way it will appear with Smart Invert Colors enabled.
public static func impreciseImageWithSmartInvert(precision: Float) -> Snapshotting {
func postNotification() {
NotificationCenter.default.post(
name: UIAccessibility.invertColorsStatusDidChangeNotification,
object: nil,
userInfo: nil
)
}

return Snapshotting<UIImage, UIImage>.image.pullback { view in
let requiresWindow = (view.window == nil && !(view is UIWindow))

if requiresWindow {
let window = UIApplication.shared.firstKeyWindow ?? UIWindow(frame: UIScreen.main.bounds)
window.addSubview(view)
}

view.layoutIfNeeded()

let statusUtility = UIAccessibilityStatusUtility()
statusUtility.mockInvertColorsStatus()
postNotification()

let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
let image = renderer.image { context in
view.drawHierarchyWithInvertedColors(in: view.bounds, using: context)
}

statusUtility.unmockStatuses()
postNotification()

if requiresWindow {
view.removeFromSuperview()
}

return image
}
}

}

extension Snapshotting where Value == UIViewController, Format == UIImage {

/// Snapshots the current view with colored overlays of each accessibility element it contains, as well as an
/// approximation of the description that VoiceOver will read for each element.
///
/// - parameter showActivationPoints: When to show indicators for elements' accessibility activation points.
/// Defaults to showing activation points only when they are different than the default activation point for that
/// element.
/// - parameter useMonochromeSnapshot: Whether or not the snapshot of the `view` should be monochrome. Using a
/// monochrome snapshot makes it more clear where the highlighted elements are, but may make it difficult to
/// read certain views. Defaults to `true`.
/// - parameter drawHierarchyInKeyWindow: Whether or not to draw the view hierarchy in the key window, rather than
/// rendering the view's layer. This enables the rendering of `UIAppearance` and `UIVisualEffect`s.
/// - parameter markerColors: The array of colors which will be chosen from when creating the overlays.
/// - parameter precision: The portion of pixels that must match for the image to be considered "unchanged". Value
/// must be in the range `[0,1]`, where `0` means no pixels must match and `1` means all pixels must match.
public static func impreciseAccessibilityImage(
showActivationPoints activationPointDisplayMode: ActivationPointDisplayMode = .whenOverridden,
useMonochromeSnapshot: Bool = true,
drawHierarchyInKeyWindow: Bool = false,
markerColors: [UIColor] = [],
precision: Float
) -> Snapshotting {
return Snapshotting<UIView, UIImage>
.impreciseAccessibilityImage(
showActivationPoints: activationPointDisplayMode,
useMonochromeSnapshot: useMonochromeSnapshot,
drawHierarchyInKeyWindow: drawHierarchyInKeyWindow,
markerColors: markerColors,
precision: precision
)
.pullback { viewController in
viewController.view
}
}

/// Snapshots the current view simulating the way it will appear with Smart Invert Colors enabled.
public static func impreciseImageWithSmartInvert(precision: Float) -> Snapshotting {
return Snapshotting<UIView, UIImage>
.impreciseImageWithSmartInvert(precision: precision)
.pullback { viewController in
viewController.view
}
}

}
