[arch-commits] Commit in python-pytorch/trunk (2 files)

Sven-Hendrik Haase svenstaro at gemini.archlinux.org
Thu May 5 15:45:02 UTC 2022


    Date: Thursday, May 5, 2022 @ 15:45:02
  Author: svenstaro
Revision: 1195504

upgpkg: python-pytorch 1.11.0-7: Fix FS#74593

Added:
  python-pytorch/trunk/98f9ff90268ae62ab6d794cce0786121bf17edc9.patch
Modified:
  python-pytorch/trunk/PKGBUILD

------------------------------------------------+
 98f9ff90268ae62ab6d794cce0786121bf17edc9.patch |   44 +++++++++++++++++++++++
 PKGBUILD                                       |    7 +++
 2 files changed, 50 insertions(+), 1 deletion(-)

Added: 98f9ff90268ae62ab6d794cce0786121bf17edc9.patch
===================================================================
--- 98f9ff90268ae62ab6d794cce0786121bf17edc9.patch	                        (rev 0)
+++ 98f9ff90268ae62ab6d794cce0786121bf17edc9.patch	2022-05-05 15:45:02 UTC (rev 1195504)
@@ -0,0 +1,44 @@
+From 98f9ff90268ae62ab6d794cce0786121bf17edc9 Mon Sep 17 00:00:00 2001
+From: BowenBao <bowbao at microsoft.com>
+Date: Thu, 17 Feb 2022 10:45:24 -0800
+Subject: [PATCH] [ONNX] Fix an assertion failure involving Slice (#71965)
+
+Before this change, exporting a model involving Slice to ONNX crashes at `axes[i]` on line 153 if C++ assertions are enabled:
+```
+/usr/include/c++/11.1.0/bits/stl_vector.h:1045: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = long int; _Alloc = std::allocator<long int>; std::vector<_Tp, _Alloc>::reference = long int&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__n < this->size()' failed.
+```
+The relevant check is https://github.com/gcc-mirror/gcc/blob/releases/gcc-11.1.0/libstdc++-v3/include/bits/stl_vector.h#L1045, which checks the vector index.
+
+The issue can be reproduced by exporting Mask R-CNN or a similar model. For example,
+```Python
+import io
+import torch
+import torchvision as tv
+
+model = tv.models.detection.maskrcnn_resnet50_fpn(pretrained=False)
+x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
+with io.BytesIO() as f:
+    torch.onnx.export(model, x, f, opset_version=11)
+```
+(extracted from [onnxoptimizer tests](https://github.com/onnx/optimizer/blob/master/onnxoptimizer/test/optimizer_test.py))
+
+Tested environment: Arch Linux x86_64 with pytorch and torchvision installed from [the official repo](https://github.com/archlinux/svntogit-community/blob/packages/python-pytorch/trunk/PKGBUILD) and [AUR](https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=python-torchvision), respectively.
+
+Pull Request resolved: https://github.com/pytorch/pytorch/pull/72989
+---
+ torch/csrc/jit/passes/onnx/constant_fold.cpp | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/torch/csrc/jit/passes/onnx/constant_fold.cpp b/torch/csrc/jit/passes/onnx/constant_fold.cpp
+index 2901a9b8043c0..e52d77d04c756 100644
+--- a/torch/csrc/jit/passes/onnx/constant_fold.cpp
++++ b/torch/csrc/jit/passes/onnx/constant_fold.cpp
+@@ -147,7 +147,7 @@ c10::optional<at::Tensor> runTorchSlice_opset10(
+       return c10::nullopt;
+     }
+     auto axes_a = inputTensorValues[3].accessor<int64_t, 1>();
+-    axes.reserve(inputTensorValues[3].sizes()[0]);
++    axes.resize(inputTensorValues[3].sizes()[0]);
+     // ONNX slice accepts negative axis, fix this for aten op
+     for (const auto i : c10::irange(inputTensorValues[3].sizes()[0])) {
+       axes[i] = axes_a[i] < 0 ? axes_a[i] + inputTensorValues[0].sizes().size()

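For context (not part of the committed patch): the crash happens because `reserve()` only grows capacity and leaves `size()` at zero, so the subsequent `axes[i]` writes index past the end of the vector; `resize()` actually creates the elements first. A minimal standalone C++ sketch of that difference (hypothetical example code, not taken from PyTorch):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

int main() {
  const std::size_t n = 4;

  // reserve() only allocates capacity; size() stays 0, so indexing the
  // vector afterwards reads past the end (the libstdc++ assertion fires
  // when _GLIBCXX_ASSERTIONS is enabled).
  std::vector<std::int64_t> reserved;
  reserved.reserve(n);
  assert(reserved.size() == 0);  // reserved[0] would be out of bounds here

  // resize() value-initializes n elements, so subsequent writes are in
  // bounds -- mirroring the reserve() -> resize() change in the patch.
  std::vector<std::int64_t> resized;
  resized.resize(n);
  for (std::size_t i = 0; i < n; ++i) {
    resized[i] = static_cast<std::int64_t>(i);
  }
  assert(resized.size() == n);
  return 0;
}
```
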
Modified: PKGBUILD
===================================================================
--- PKGBUILD	2022-05-05 15:15:47 UTC (rev 1195503)
+++ PKGBUILD	2022-05-05 15:45:02 UTC (rev 1195504)
@@ -6,7 +6,7 @@
 pkgname=("${pkgbase}" "${pkgbase}-cuda")
 pkgver=1.11.0
 _pkgver=1.11.0
-pkgrel=6
+pkgrel=7
 _pkgdesc='Tensors and Dynamic neural networks in Python with strong GPU acceleration'
 pkgdesc="${_pkgdesc}"
 arch=('x86_64')
@@ -61,6 +61,7 @@
         fix_include_system.patch
         use-system-libuv.patch
         fix-building-for-torchvision.patch
+        98f9ff90268ae62ab6d794cce0786121bf17edc9.patch
         ffmpeg4.4.patch)
 sha256sums=('SKIP'
             'SKIP'
@@ -104,6 +105,7 @@
             '557761502bbd994d9795bef46779e4b8c60ba0b45e7d60841f477d3b7f28a00a'
             'cd9ac4aaa9f946ac5eafc57cf66c5c16b3ea7ac8af32c2558fad0705411bb669'
             '600bd6a4bbcec9f99ab815d82cee1c2875530b2b75f4010da5ba72ce9bf31aff'
+            'cf6ec8e4952765b190e1cae247a814dd1e6b3e9c8b3ad5600118d69d6faa6eb5'
             '75001b59e76831b0c93a547f851cb980e00b0d8cc7b66fb507eaeac217dc6ff9')
 options=('!lto')
 
@@ -161,6 +163,9 @@
   # https://bugs.archlinux.org/task/64981
   patch -N torch/utils/cpp_extension.py "${srcdir}"/fix_include_system.patch
 
+  # Fix https://bugs.archlinux.org/task/74593
+  patch -Np1 -i "${srcdir}"/98f9ff90268ae62ab6d794cce0786121bf17edc9.patch
+
   # Use system libuv
   patch -Np1 -i "${srcdir}"/use-system-libuv.patch
 
