Android[Exception][java.lang.UnsatisfiedLinkError]

The QR code app from the previous article uses JNI code, and on a Nexus 5X (Android 7.1.1) it crashes with the following exception:

AndroidRuntime: java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[zip file "/data/app/…/base.apk"],nativeLibraryDirectories=[/data/app/…/lib/arm64, /system/fake-libs64, /data/app/…/base.apk!lib/arm64-v8a, /system/lib64, /system/lib64, /vendor/lib64]]] couldn't find "libxxx.so"
AndroidRuntime: at java.lang.Runtime.loadLibrary…

The matching .so file is not inside the APK and cannot be found in any of the listed directories either, so the exception is thrown. Note that the library is being looked up for arm64-v8a, which indeed was not packaged. Let's see what others say about it:
http://blog.csdn.net/qiuchangyong/article/details/50040579

11. APP_ABI currently accepts the following values: (1) 32-bit: armeabi, armeabi-v7a, x86, mips; (2) 64-bit: arm64-v8a, x86_64, mips64;

12. Notes: (1) the emulators currently offer x86_64 images but no arm64-v8a; (2) when testing armv8-a on a real device, first run adb shell and cat /proc/cpuinfo to confirm the device actually supports armv8-a; (3) some build flags cannot be shared between arm32 and arm64, e.g. -msoft-float is only supported on 32-bit ARM, not on 64-bit ARM.

13. This command reports the device's primary ABI: adb shell getprop ro.product.cpu.abi

Running on the local emulator threw the same exception because the x86_64 ABI was missing; once the missing ABIs were added, the problem was solved.
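
As a quick sanity check before repackaging, the ABIs a device or emulator actually supports can be logged at runtime and compared against the lib/<abi> folders inside the APK. A minimal sketch (the class name is just for illustration; Build.SUPPORTED_ABIS requires API 21+):

    import android.os.Build;
    import android.util.Log;

    public final class AbiLogger {
        private static final String TAG = "AbiLogger";

        /** Logs every ABI the current device supports, e.g. arm64-v8a, armeabi-v7a, x86_64. */
        public static void logSupportedAbis() {
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
                for (String abi : Build.SUPPORTED_ABIS) {
                    Log.i(TAG, "Supported ABI: " + abi);
                }
            } else {
                Log.i(TAG, "Primary ABI: " + Build.CPU_ABI); // deprecated, but fine before API 21
            }
        }
    }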

Android[QR Code] ZXing in Practice (Part 2)

Just when it seemed everything was done and it was time to take a break, a wave of problems came back from testing...
1. Pointing the camera at a QR code on a computer monitor, it keeps refocusing and never produces a result
2. After the scan screen is locked and unlocked again, the preview is frozen
3. On the Nexus 5X (Android 7.1.1) the camera preview on the scan screen is flipped both vertically and horizontally

Let's start with issue 2. Debugging on a real device showed that hasSurface in CaptureActivity is false by the time onPause runs; searching for where the flag is assigned revealed it had been copied into the wrong place...

   @Override
    public void surfaceCreated(SurfaceHolder holder) {
        if (holder == null) {
            Log.e(TAG, "*** WARNING *** surfaceCreated() gave us a null surface!");
        }
        if (!hasSurface) {
            hasSurface = true;
            initCamera(holder);
        }
    }
 
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    }
 
    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        hasSurface = false;
    }

The flag should be reset in surfaceDestroyed (as above), but the reset had ended up in surfaceChanged. Oops...

Now for issue 1, which is an interesting one: the cause is the landscape-to-portrait conversion added to the decoding path in the previous article. Here is the code:

    // rotate the landscape frame to portrait
    byte[] rotatedData = new byte[data.length];
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++)
        rotatedData[x * height + height - y - 1] = data[x + y * width];
    }
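
To convince ourselves that this index arithmetic really is a 90° clockwise rotation, here is a tiny standalone check (purely illustrative, not part of the project):

    public final class RotateCheck {
        public static void main(String[] args) {
            // A 3x2 "frame": row 0 = {1,2,3}, row 1 = {4,5,6}
            int width = 3, height = 2;
            byte[] data = {1, 2, 3, 4, 5, 6};
            byte[] rotated = new byte[data.length];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    rotated[x * height + height - y - 1] = data[x + y * width];
                }
            }
            // Rotated 90° clockwise the frame becomes 2x3: {4,1}, {5,2}, {6,3}
            for (byte b : rotated) {
                System.out.print(b + " ");   // prints: 4 1 5 2 6 3
            }
        }
    }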

In real-device testing this landscape-to-portrait conversion is very expensive, around 15000 ms. Now look at the auto-focus side of things:

   private static final long AUTO_FOCUS_INTERVAL_MS = 2000L;

Auto-focus only starts a new focus pass after the previous one has finished and this interval has elapsed. Given the conversion cost above, it is not hard to see why the camera appears to refocus over and over; as a further tweak, this auto-focus interval can be changed to 1 s or even shorter.
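
For context, here is a simplified sketch of the loop that constant drives. It is behaviourally what ZXing's AutoFocusManager does, but not its actual source; the class and field names are made up:

    import android.hardware.Camera;
    import android.os.Handler;
    import android.os.Looper;

    /** Simplified refocus loop: focus, wait the interval after it completes, focus again. */
    final class SimpleFocusLoop {
        private static final long AUTO_FOCUS_INTERVAL_MS = 2000L;
        private final Handler focusHandler = new Handler(Looper.getMainLooper());

        void start(final Camera camera) {
            camera.autoFocus(new Camera.AutoFocusCallback() {
                @Override
                public void onAutoFocus(boolean success, Camera cam) {
                    // Only after this pass completes: wait the interval, then focus again.
                    focusHandler.postDelayed(new Runnable() {
                        @Override
                        public void run() {
                            start(camera);
                        }
                    }, AUTO_FOCUS_INTERVAL_MS);
                }
            });
        }
    }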

With the cause identified, the fix is straightforward. The first step is to replace the Java rotation above with JNI code:

   JNIEXPORT jbyteArray JNICALL Java_xxxxxx_rotateSource
  (JNIEnv *env, jclass clz, jbyteArray jdata, jint dataWidth, jint dataHeight)
{
    // Output array, same length as the input frame (the extra chroma bytes stay zeroed).
    jbyteArray jbuf = env->NewByteArray(env->GetArrayLength(jdata));
    jbyte *pRawData = env->GetByteArrayElements(jbuf, NULL);
    jbyte *buffer = env->GetByteArrayElements(jdata, NULL);
 
    // Rotate the luminance (Y) plane 90° clockwise; ZXing's PlanarYUVLuminanceSource
    // only reads the Y plane, so the chroma data can be ignored.
    for (int y = 0; y < dataHeight; y++) {
        for (int x = 0; x < dataWidth; x++) {
            pRawData[x * dataHeight + dataHeight - y - 1] = buffer[x + y * dataWidth];
        }
    }
 
    // The source was only read: JNI_ABORT skips an unnecessary copy-back.
    env->ReleaseByteArrayElements(jdata, buffer, JNI_ABORT);
    // The destination was written: mode 0 commits the changes into jbuf.
    env->ReleaseByteArrayElements(jbuf, pRawData, 0);
    return jbuf;
}

With the native rotateSource method, a 2560*1440 preview frame takes less than 150 ms on average on a real device. Even counting the JNI transition overhead, that is genuinely about ten times faster than the Java code. To optimize further, there is no need to rotate the whole landscape frame; rotating only the pixels inside the scan region is enough and would be faster still. Going even further, the camera data could be handed straight to JNI for the whole decode. For a fully JNI-based decoder, see: https://github.com/heiBin/QrCodeScanner
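
On the Java side the native method still needs a matching declaration. The real package, class and library names were elided above (xxxxxx / libxxx.so), so the binding below uses placeholder names, which would have to match however the JNI symbol is actually registered:

    // Hypothetical Java-side binding; all names here are placeholders.
    public final class NativeRotator {
        static {
            System.loadLibrary("xxx");   // loads libxxx.so
        }
        /** Returns a new byte[] whose Y plane is the input rotated 90° clockwise. */
        public static native byte[] rotateSource(byte[] data, int dataWidth, int dataHeight);
    }

In DecodeHandler.decode() the Java loop is then replaced by a single call such as data = NativeRotator.rotateSource(data, width, height), followed by the same width/height swap.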

The decode step itself can also be made cheaper: if you only need 2D codes, you can restrict decoding to just two formats:

   // The prefs can't change while the thread is running, so pick them up once here.
    if (decodeFormats == null || decodeFormats.isEmpty()) {
      SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(activity);
      decodeFormats = EnumSet.noneOf(BarcodeFormat.class);
      if (prefs.getBoolean(PreferencesActivity.KEY_DECODE_QR, true)) {
        decodeFormats.addAll(DecodeFormatManager.QR_CODE_FORMATS);
      }
      if (prefs.getBoolean(PreferencesActivity.KEY_DECODE_DATA_MATRIX, true)) {
        decodeFormats.addAll(DecodeFormatManager.DATA_MATRIX_FORMATS);
      }
    }
    hints.put(DecodeHintType.POSSIBLE_FORMATS, decodeFormats);
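
If the preference lookup is not needed at all, the hints map can be built directly; a minimal sketch against the stock ZXing hint API (imports: java.util.EnumMap, java.util.EnumSet, java.util.Map and com.google.zxing.*):

    // Restrict decoding to the two 2D formats up front, skipping the preference flags.
    Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
    hints.put(DecodeHintType.POSSIBLE_FORMATS,
              EnumSet.of(BarcodeFormat.QR_CODE, BarcodeFormat.DATA_MATRIX));

Fewer entries in POSSIBLE_FORMATS means MultiFormatReader tries fewer readers on every frame.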

As for issue 3, let's first go through an article on camera orientation as a primer:

http://blog.csdn.net/wangbaochu/article/details/44345903

With that in mind, let's look at the code:

CameraConfigurationManager:

   /**
   * Reads, one time, values from the camera that are needed by the app.
   */
  void initFromCameraParameters(OpenCamera camera) {
    Camera.Parameters parameters = camera.getCamera().getParameters();
    WindowManager manager = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
    Display display = manager.getDefaultDisplay();
 
    int displayRotation = display.getRotation();
    int cwRotationFromNaturalToDisplay;
    switch (displayRotation) {
      case Surface.ROTATION_0:
        cwRotationFromNaturalToDisplay = 0;
        break;
      case Surface.ROTATION_90:
        cwRotationFromNaturalToDisplay = 90;
        break;
      case Surface.ROTATION_180:
        cwRotationFromNaturalToDisplay = 180;
        break;
      case Surface.ROTATION_270:
        cwRotationFromNaturalToDisplay = 270;
        break;
      default:
        // Have seen this return incorrect values like -90
        if (displayRotation % 90 == 0) {
          cwRotationFromNaturalToDisplay = (360 + displayRotation) % 360;
        } else {
          throw new IllegalArgumentException("Bad rotation: " + displayRotation);
        }
    }
    Log.i(TAG, "Display at: " + cwRotationFromNaturalToDisplay);
 
    int cwRotationFromNaturalToCamera = camera.getOrientation();
    Log.i(TAG, "Camera at: " + cwRotationFromNaturalToCamera);
 
    // Still not 100% sure about this. But acts like we need to flip this:
    if (camera.getFacing() == CameraFacing.FRONT) {
      cwRotationFromNaturalToCamera = (360 - cwRotationFromNaturalToCamera) % 360;
      Log.i(TAG, "Front camera overriden to: " + cwRotationFromNaturalToCamera);
    }
 
    /*
    SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(context);
    String overrideRotationString;
    if (camera.getFacing() == CameraFacing.FRONT) {
      overrideRotationString = prefs.getString(PreferencesActivity.KEY_FORCE_CAMERA_ORIENTATION_FRONT, null);
    } else {
      overrideRotationString = prefs.getString(PreferencesActivity.KEY_FORCE_CAMERA_ORIENTATION, null);
    }
    if (overrideRotationString != null && !"-".equals(overrideRotationString)) {
      Log.i(TAG, "Overriding camera manually to " + overrideRotationString);
      cwRotationFromNaturalToCamera = Integer.parseInt(overrideRotationString);
    }
     */
 
    cwRotationFromDisplayToCamera =
        (360 + cwRotationFromNaturalToCamera - cwRotationFromNaturalToDisplay) % 360;
    Log.i(TAG, "Final display orientation: " + cwRotationFromDisplayToCamera);
    if (camera.getFacing() == CameraFacing.FRONT) {
      Log.i(TAG, "Compensating rotation for front camera");
      cwNeededRotation = (360 - cwRotationFromDisplayToCamera) % 360;
    } else {
      cwNeededRotation = cwRotationFromDisplayToCamera;
    }
    Log.i(TAG, "Clockwise rotation from display to camera: " + cwNeededRotation);
 
    Point theScreenResolution = new Point();
    display.getSize(theScreenResolution);
    screenResolution = theScreenResolution;
    Log.i(TAG, "Screen resolution in current orientation: " + screenResolution);
    // adjust for preview stretching
    Point screenResolutionForCamera = new Point();
    screenResolutionForCamera.x = screenResolution.x;
    screenResolutionForCamera.y = screenResolution.y;
    if (screenResolution.x < screenResolution.y) {
      screenResolutionForCamera.x = screenResolution.y;
      screenResolutionForCamera.y = screenResolution.x;
    }
    cameraResolution = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, screenResolutionForCamera);
    Log.i(TAG, "Camera resolution: " + cameraResolution);
    bestPreviewSize = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, screenResolutionForCamera);
    Log.i(TAG, "Best available preview size: " + bestPreviewSize);
 
    boolean isScreenPortrait = screenResolution.x < screenResolution.y;
    boolean isPreviewSizePortrait = bestPreviewSize.x < bestPreviewSize.y;
 
    if (isScreenPortrait == isPreviewSizePortrait) {
      previewSizeOnScreen = bestPreviewSize;
    } else {
      previewSizeOnScreen = new Point(bestPreviewSize.y, bestPreviewSize.x);
    }
    Log.i(TAG, "Preview size on screen: " + previewSizeOnScreen);
  }

All angles mentioned here are clockwise rotations.

First, display.getRotation() returns the angle between the phone's current position and its natural orientation, which gives cwRotationFromNaturalToDisplay, the clockwise rotation from the natural orientation to the current display orientation;
next, camera.getOrientation() gives the camera sensor's physical orientation, cwRotationFromNaturalToCamera;
then, if camera.getFacing() == CameraFacing.FRONT, i.e. a front camera, cwRotationFromNaturalToCamera has to be mirrored;
finally, the rotation from the display to the camera, cwRotationFromDisplayToCamera, is computed:

   cwRotationFromDisplayToCamera =
        (360 + cwRotationFromNaturalToCamera - cwRotationFromNaturalToDisplay) % 360;

For a front camera, the result has to be mirrored back once more.
So it turns out that forcing portrait with setDisplayOrientation(90) in the previous article was entirely unnecessary... The logs make this clearer:

  CameraConfiguration: Display at: 0
  CameraConfiguration: Camera at: 270
  CameraConfiguration: Final display orientation: 270
  CameraConfiguration: Clockwise rotation from display to camera: 270
  ...

As the log shows, the preview should be rotated by 270°, yet it was being forced to 90°; changing that line back to cwNeededRotation fixes it.
Now compare with the log from an ordinary camera:

  CameraConfiguration: Display at: 0
  CameraConfiguration: Camera at: 90
  CameraConfiguration: Final display orientation: 90
  CameraConfiguration: Clockwise rotation from display to camera: 90
  ...

With the phone held in portrait the display angle is 0°, the back camera's physical orientation is 90° clockwise, and the computed rotation from display to camera comes out as 90° clockwise.
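
The same computation can also be written directly against the android.hardware.Camera API; the sketch below follows the familiar pattern from the Camera.setDisplayOrientation documentation (the helper class name is made up). For the Nexus 5X back camera above, info.orientation is 270 and degrees is 0, giving 270, which matches the log.

    import android.app.Activity;
    import android.hardware.Camera;
    import android.view.Surface;

    public final class CameraOrientationHelper {
        /** Rotates the preview so that it appears upright on the current display. */
        public static void setCameraDisplayOrientation(Activity activity, int cameraId, Camera camera) {
            Camera.CameraInfo info = new Camera.CameraInfo();
            Camera.getCameraInfo(cameraId, info);

            int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
            int degrees = 0;
            switch (rotation) {
                case Surface.ROTATION_0:   degrees = 0;   break;
                case Surface.ROTATION_90:  degrees = 90;  break;
                case Surface.ROTATION_180: degrees = 180; break;
                case Surface.ROTATION_270: degrees = 270; break;
            }

            int result;
            if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
                result = (info.orientation + degrees) % 360;
                result = (360 - result) % 360;   // compensate for the front-camera mirror
            } else {
                result = (info.orientation - degrees + 360) % 360;
            }
            camera.setDisplayOrientation(result);
        }
    }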

To sum up: the old saying that debts taken on in June come due quickly fits the previous article perfectly; the hard-coded 90° and the landscape-to-portrait conversion were both the result of not thinking things through.
The performance problem was also well hidden, because the work happens behind message passing; only hands-on debugging brought it to light.
Still, quite a lot was learned along the way.

Android[QR Code] ZXing in Practice

1. ZXing
Source: https://github.com/zxing/zxing
2. Creating the Android project
The source tree contains four directories whose names start with android:
①android
②android-core
③android-integration
④androidtest

The android directory holds the sample app we need; an Android project can be created straight from it.
Opening the project, all sorts of dependencies are missing, mostly under com.google.zxing.*.
Why? Because the core directory at the repository root, plus android-core, are also required.
So build those two directories as Android library projects, package them into jar files, and drop the jars into the new Android project.
3. Building, debugging and running the app
First, let's see how the camera gets started.

CaptureActivity:

   @Override
    protected void onResume() {
        super.onResume();
 
        // CameraManager must be initialized here, not in onCreate(). This is necessary because we don't
        // want to open the camera driver and measure the screen size if we're going to show the help on
        // first launch. That led to bugs where the scanning rectangle was the wrong size and partially
        // off screen.
        cameraManager = new CameraManager(getApplication());
 
        ViewfinderView viewfinderView = getViewfinderView();
        viewfinderView.setCameraManager(cameraManager);
 
        SurfaceView surfaceView = getSurfaceView();
        SurfaceHolder surfaceHolder = surfaceView.getHolder();
        if (hasSurface) {
            // The activity was paused but not stopped, so the surface still exists. Therefore
            // surfaceCreated() won't be called, so init the camera here.
            initCamera(surfaceHolder);
        } else {
            // Install the callback and wait for surfaceCreated() to init the camera.
            surfaceHolder.addCallback(this);
        }
    }
    @Override
    protected void onPause() {
        if (handler != null) {
            handler.quitSynchronously();
            handler = null;
        }
        cameraManager.closeDriver();
        if (!hasSurface) {
            SurfaceView surfaceView = getSurfaceView();
            SurfaceHolder surfaceHolder = surfaceView.getHolder();
            surfaceHolder.removeCallback(this);
        }
        super.onPause();
    }

In onResume, CaptureActivity creates the CameraManager and binds the SurfaceView to the camera preview;
in onPause it closes the camera driver and removes the SurfaceView callback.

CaptureActivity:

   private void initCamera(SurfaceHolder surfaceHolder) {
        if (surfaceHolder == null) {
            throw new IllegalStateException("No SurfaceHolder provided");
        }
        if (cameraManager.isOpen()) {
            Log.w(TAG, "initCamera() while already open -- late SurfaceView callback?");
            return;
        }
        try {
            cameraManager.openDriver(surfaceHolder);
            // Creating the handler starts the preview, which can also throw a RuntimeException.
            if (handler == null) {
                handler = new CaptureActivityHandler(this, null, null, null, cameraManager);
            }
            decodeOrStoreSavedBitmap(null, null);
        } catch (IOException ioe) {
            Log.w(TAG, ioe);
            displayFrameworkBugMessageAndExit();
        } catch (RuntimeException e) {
            // Barcode Scanner has seen crashes in the wild of this variety:
            // java.?lang.?RuntimeException: Fail to connect to camera service
            Log.w(TAG, "Unexpected error initializing camera", e);
            displayFrameworkBugMessageAndExit();
        }
    }

After opening the camera, initCamera creates the CaptureActivityHandler.

CaptureActivityHandler:

   CaptureActivityHandler(CaptureActivity activity,
                         Collection<BarcodeFormat> decodeFormats,
                         Map<DecodeHintType,?> baseHints,
                         String characterSet,
                         CameraManager cameraManager) {
    this.activity = activity;
    decodeThread = new DecodeThread(activity, decodeFormats, baseHints, characterSet,
            new ViewfinderResultPointCallback(activity.getViewfinderView()));
    decodeThread.start();
    state = State.SUCCESS;
 
    // Start ourselves capturing previews and decoding.
    this.cameraManager = cameraManager;
    cameraManager.startPreview();
    restartPreviewAndDecode();
  }
  @Override
  public void handleMessage(Message message) {
    switch (message.what) {
        ...
    }
  }

CaptureActivityHandler starts the decoding thread DecodeThread, starts the camera preview, and handles the Messages it receives.
DecodeThread:

   @Override
  public void run() {
    Looper.prepare();
    handler = new DecodeHandler(activity, hints);
    handlerInitLatch.countDown();
    Looper.loop();
  }

Inside DecodeThread a decode handler, DecodeHandler, is created. Via CameraManager.requestPreviewFrame, the PreviewCallback forwards camera preview frames to the DecodeHandler for decoding.
CameraManager:

   /**
   * A single preview frame will be returned to the handler supplied. The data will arrive as byte[]
   * in the message.obj field, with width and height encoded as message.arg1 and message.arg2,
   * respectively.
   *
   * @param handler The handler to send the message to.
   * @param message The what field of the message to be sent.
   */
  public synchronized void requestPreviewFrame(Handler handler, int message) {
    OpenCamera theCamera = camera;
    if (theCamera != null && previewing) {
      previewCallback.setHandler(handler, message);
      theCamera.getCamera().setOneShotPreviewCallback(previewCallback);
    }
  }

PreviewCallback:

  @Override
  public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    if (cameraResolution != null && thePreviewHandler != null) {
      Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
          cameraResolution.y, data);
      message.sendToTarget();
      previewHandler = null;
    } else {
      Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
  }

DecodeHandler handles the decode and quit messages; on a successful decode it sends a decode_succeeded message to CaptureActivityHandler.
DecodeHandler:

  @Override
  public void handleMessage(Message message) {
    if (!running) {
      return;
    }
    switch (message.what) {
      case Messages.idx_decode:
        decode((byte[]) message.obj, message.arg1, message.arg2);
        break;
      case Messages.idx_quit:
        running = false;
        Looper.myLooper().quit();
        break;
    }
  }
  /**
   * Decode the data within the viewfinder rectangle, and time how long it took. For efficiency,
   * reuse the same reader objects from one decode to the next.
   *
   * @param data   The YUV preview frame.
   * @param width  The width of the preview frame.
   * @param height The height of the preview frame.
   */
  private void decode(byte[] data, int width, int height) {
    // rotate the landscape frame to portrait
    byte[] rotatedData = new byte[data.length];
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++)
        rotatedData[x * height + height - y - 1] = data[x + y * width];
    }
    int tmp = width;
    width = height;
    height = tmp;
    data = rotatedData;
 
    long start = System.currentTimeMillis();
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
      BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
      try {
        rawResult = multiFormatReader.decodeWithState(bitmap);
      } catch (ReaderException re) {
        // continue
      } finally {
        multiFormatReader.reset();
      }
    }
 
    Handler handler = activity.getHandler();
    if (rawResult != null) {
      // Don't log the barcode contents for security.
      long end = System.currentTimeMillis();
      Log.d(TAG, "Found barcode in " + (end - start) + " ms");
      if (handler != null) {
        Message message = Message.obtain(handler, Messages.idx_decode_succeeded, rawResult);
        Bundle bundle = new Bundle();
        bundleThumbnail(source, bundle);        
        message.setData(bundle);
        message.sendToTarget();
      }
    } else {
      if (handler != null) {
        Message message = Message.obtain(handler, Messages.idx_decode_failed);
        message.sendToTarget();
      }
    }
  }

CaptureActivityHandler turns the successfully decoded data into a Bitmap and hands it to CaptureActivity.
CaptureActivityHandler:

  @Override
  public void handleMessage(Message message) {
    switch (message.what) {
      case Messages.idx_restart_preview:
        restartPreviewAndDecode();
        break;
      case Messages.idx_decode_succeeded:
        state = State.SUCCESS;
        Bundle bundle = message.getData();
        Bitmap barcode = null;
        float scaleFactor = 1.0f;
        if (bundle != null) {
          byte[] compressedBitmap = bundle.getByteArray(DecodeThread.BARCODE_BITMAP);
          if (compressedBitmap != null) {
            barcode = BitmapFactory.decodeByteArray(compressedBitmap, 0, compressedBitmap.length, null);
            // Mutable copy:
            barcode = barcode.copy(Bitmap.Config.ARGB_8888, true);
          }
          scaleFactor = bundle.getFloat(DecodeThread.BARCODE_SCALED_FACTOR);
        }
        activity.handleDecode((Result) message.obj, barcode, scaleFactor);
        break;
      case Messages.idx_decode_failed:
        // We're decoding as fast as possible, so when one decode fails, start another.
        state = State.PREVIEW;
        cameraManager.requestPreviewFrame(decodeThread.getHandler(), Messages.idx_decode);
        break;
    }
  }

handleDecode in CaptureActivity processes the final scan result and the barcode Bitmap.
CaptureActivity:

   /**
     * A valid barcode has been found, so give an indication of success and show the results.
     *
     * @param rawResult The contents of the barcode.
     * @param scaleFactor amount by which thumbnail was scaled
     * @param barcode   A greyscale bitmap of the camera data which was decoded.
     */
    public void handleDecode(Result rawResult, Bitmap barcode, float scaleFactor) {
        lastResult = rawResult;
 
        boolean fromLiveScan = barcode != null;
        if (fromLiveScan) {
            // Then not from history, so beep/vibrate and we have an image to draw on
            beepManager.playBeepSoundAndVibrate();
            drawResultPoints(barcode, scaleFactor, rawResult);
        }
 
        handleDecodeInternally(rawResult, barcode);
    }

After a successful scan it plays a beep and draws the result points detected on the code. That is the whole Camera -> Preview -> Decode -> done flow.

4. CaptureActivity defaults to sensorLandscape; switching it to portrait requires adjusting the camera rotation
① Change the CaptureActivity orientation in the manifest, and disable the setRequestedOrientation code in onResume
②com.google.zxing.client.android.camera.CameraConfigurationManager.setDesiredCameraParameters(OpenCamera camera, boolean safeMode)

 
    parameters.setPreviewSize(bestPreviewSize.x, bestPreviewSize.y);
 
    theCamera.setParameters(parameters);
 
    theCamera.setDisplayOrientation(cwRotationFromDisplayToCamera);

Change the last line to

    // force portrait
    theCamera.setDisplayOrientation(90);

③CameraManager.getFramingRectInPreview (the preview, i.e. cameraResolution, is still landscape while the screen is now portrait, so the x and y axes are swapped when mapping the framing rect into preview coordinates)

      /*
      rect.left = rect.left * cameraResolution.x / screenResolution.x;
      rect.right = rect.right * cameraResolution.x / screenResolution.x;
      rect.top = rect.top * cameraResolution.y / screenResolution.y;
      rect.bottom = rect.bottom * cameraResolution.y / screenResolution.y;
      */
      // change to portrait
      rect.left = rect.left * cameraResolution.y / screenResolution.x;
      rect.right = rect.right * cameraResolution.y / screenResolution.x;
      rect.top = rect.top * cameraResolution.x / screenResolution.y;
      rect.bottom = rect.bottom * cameraResolution.x / screenResolution.y;

④ Add the following at the top of DecodeHandler.decode(byte[] data, int width, int height)

    // rotate the landscape frame to portrait
    byte[] rotatedData = new byte[data.length];
    for (int y = 0; y < height; y++) {
      for (int x = 0; x < width; x++)
        rotatedData[x * height + height - y - 1] = data[x + y * width];
    }
    int tmp = width;
    width = height;
    height = tmp;
    data = rotatedData;

⑤com.google.zxing.client.android.camera.CameraConfigurationManager.initFromCameraParameters(OpenCamera camera)

   cameraResolution = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, screenResolution);
    Log.i(TAG, "Camera resolution: " + cameraResolution);
    bestPreviewSize = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, screenResolution);

Replace it with

    // adjust for preview stretching
    Point screenResolutionForCamera = new Point();
    screenResolutionForCamera.x = screenResolution.x;
    screenResolutionForCamera.y = screenResolution.y;
    if (screenResolution.x < screenResolution.y) {
      screenResolutionForCamera.x = screenResolution.y;
      screenResolutionForCamera.y = screenResolution.x;
    }
    cameraResolution = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, screenResolutionForCamera);
    Log.i(TAG, "Camera resolution: " + cameraResolution);
    bestPreviewSize = CameraConfigurationUtils.findBestPreviewSizeValue(parameters, screenResolutionForCamera);

5. Finally, adjusting the position and size of the scan frame
com.google.zxing.client.android.camera.CameraManager.getFramingRect()

   /**
   * Calculates the framing rect which the UI should draw to show the user where to place the
   * barcode. This target helps with alignment as well as forces the user to hold the device
   * far enough away to ensure the image will be in focus.
   *
   * @return The rectangle to draw on screen in window coordinates.
   */
  public synchronized Rect getFramingRect() {
    if (framingRect == null) {
      if (camera == null) {
        return null;
      }
      Point screenResolution = configManager.getScreenResolution();
      if (screenResolution == null) {
        // Called early, before init even finished
        return null;
      }
 
      int width = findDesiredDimensionInRange(screenResolution.x, MIN_FRAME_WIDTH, MAX_FRAME_WIDTH);
      int height = findDesiredDimensionInRange(screenResolution.y, MIN_FRAME_HEIGHT, MAX_FRAME_HEIGHT);
 
      int leftOffset = (screenResolution.x - width) / 2;
      int topOffset = (screenResolution.y - height) / 2;
      framingRect = new Rect(leftOffset, topOffset, leftOffset + width, topOffset + height);
      Log.d(TAG, "Calculated framing rect: " + framingRect);
    }
    return framingRect;
  }

Adjust the width and height calculations to suit your needs; this example simply takes the minimum of the two so that the scan area becomes a square. The code:

      // make width and height equal (square scan area)
      int tmp = Math.min(width, height);
      width = tmp;
      height = tmp;

If you also want to move the scan rectangle up or down, change how leftOffset and topOffset are computed; by default the frame is centred on the screen.
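
For example, a rough sketch that keeps the square frame but anchors it in the upper third of the screen instead of the centre (the divisor 3 is an arbitrary choice for illustration):

      int tmp = Math.min(width, height);
      width = tmp;
      height = tmp;

      int leftOffset = (screenResolution.x - width) / 2;   // still horizontally centred
      int topOffset = (screenResolution.y - height) / 3;   // higher than the default / 2
      framingRect = new Rect(leftOffset, topOffset, leftOffset + width, topOffset + height);
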
6. Customizing the frame border, and making the scan line glide smoothly like WeChat or 360 Mobile Assistant
The frame is drawn by ViewfinderView in its onDraw method:

    // Draw the exterior (i.e. outside the framing rect) darkened
    paint.setColor(resultBitmap != null ? resultColor : maskColor);
    canvas.drawRect(0, 0, width, frame.top, paint);
    canvas.drawRect(0, frame.top, frame.left, frame.bottom + 1, paint);
    canvas.drawRect(frame.right + 1, frame.top, width, frame.bottom + 1, paint);
    canvas.drawRect(0, frame.bottom + 1, width, height, paint);

While no result has been found it uses maskColor, i.e.:

   <color name="viewfinder_mask">#60000000</color>

which is a semi-transparent black mask, so only the scan-window area shows through normally.
Here we add four corner marks to the scan window by drawing a short solid horizontal and vertical bar at each corner:

    // custom corner marks: arm length 20dp, thickness 4dp
    paint.setColor(Color.GREEN);
    final float density = getResources().getDisplayMetrics().density;
    final float qr_w = 20 * density;
    final float qr_h = 4 * density;
    // top-left: horizontal + vertical
    canvas.drawRect(frame.left, frame.top, frame.left + qr_w, frame.top + qr_h, paint);
    canvas.drawRect(frame.left, frame.top, frame.left + qr_h, frame.top + qr_w, paint);
    // top-right
    canvas.drawRect(frame.right - qr_w, frame.top, frame.right, frame.top + qr_h, paint);
    canvas.drawRect(frame.right - qr_h, frame.top, frame.right, frame.top + qr_w, paint);
    // bottom-left
    canvas.drawRect(frame.left, frame.bottom - qr_w, frame.left + qr_h, frame.bottom, paint);
    canvas.drawRect(frame.left, frame.bottom - qr_h, frame.left + qr_w, frame.bottom, paint);
    // bottom-right
    canvas.drawRect(frame.right - qr_h, frame.bottom - qr_w, frame.right, frame.bottom, paint);
    canvas.drawRect(frame.right - qr_w, frame.bottom - qr_h, frame.right, frame.bottom, paint);

Next, how the default red fading "laser" scan line is drawn:

      // Draw a red "laser scanner" line through the middle to show decoding is active
      paint.setColor(laserColor);
      paint.setAlpha(SCANNER_ALPHA[scannerAlpha]);
      scannerAlpha = (scannerAlpha + 1) % SCANNER_ALPHA.length;
      int middle = frame.height() / 2 + frame.top;
      canvas.drawRect(frame.left + 2, middle - 1, frame.right - 1, middle + 2, paint);

It uses laserColor, i.e.:

   <color name="viewfinder_laser">#ffcc0000</color>

By default scannerAlpha cycles through the alpha values in the SCANNER_ALPHA array, which produces the fading effect;
the default position is at half the height of the frame area.
Note how it is refreshed:

      // Request another update at the animation interval, but only repaint the laser line,
      // not the entire viewfinder mask.
      postInvalidateDelayed(ANIMATION_DELAY,
                            frame.left - POINT_SIZE,
                            frame.top - POINT_SIZE,
                            frame.right + POINT_SIZE,
                            frame.bottom + POINT_SIZE);

Only the frame rectangle expanded by POINT_SIZE is invalidated, and only after an ANIMATION_DELAY pause;
since the red line's position never changes, it merely appears to flicker slightly.
We want the scan line to glide smoothly from top to bottom, so the refresh logic has to change as well:

      // moving scan-line mode
      paint.setColor(Color.GREEN);
      int step = 4;
      scannerPos = (scannerPos + step) % frame.height();
      int middle = frame.top + scannerPos;
      canvas.drawRect(frame.left + 2, middle - 1, frame.right - 1, middle + 2, paint);
 
      postInvalidate(frame.left - POINT_SIZE,
              frame.top - POINT_SIZE,
              frame.right + POINT_SIZE,
              frame.bottom + POINT_SIZE);

Skip the delayed repaint and keep the per-frame step small, and the scan line moves noticeably more smoothly.
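
One detail the snippet assumes: scannerPos is a new field on ViewfinderView (the stock class does not have it), e.g.:

    // Current vertical offset of the moving scan line within the framing rect.
    private int scannerPos = 0;
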

总结一下,看源代码真的是非常辛苦,拿到一个新的工程,需要摸清楚它的实现原理、掌握运行流程、以及反复的修改测试来达到想要的效果,需要付出大量的时间和精力;特别是上面的修改相机旋转方向,仅仅修改activity的方向,显示的相机图像是拉伸的;setDisplayOrientation(90)中的90这个值是怎么来的?需要仔细的查看android.hardware.Camera源码关于orientation变量,本例还未考虑前置摄像头呢。写了这么多,只想从源头记录下一步一步的前进过程,留下脉络可循,再研究就方便了。