reactjs, react-native, expo, ocr, firebase-mlkit

Getting the file path from the URI and storing it in a results array / getting react-native-mlkit-ocr to work in Expo


I was trying to get ML Kit to read the text from an image, but it could not read the file path: it threw "cannot read property of null". I tried creating an array, results, to store the file path, but that did not work. This was done in a React Native Expo project:

import React, { useState } from 'react';
import MlkitOcr from 'react-native-mlkit-ocr';

import { View, Text, Image, Button, TouchableOpacity } from 'react-native';
import styles from './Styles';

import * as ImagePicker from 'expo-image-picker';

function App() {
  // The path of the picked image
  const [pickedImagePath, setPickedImagePath] = useState('');
  const results = []

  // This function is triggered when the "Select an image" button pressed
  const showImagePicker = async () => {
    // Ask the user for the permission to access the media library 
    const permissionResult = await ImagePicker.requestMediaLibraryPermissionsAsync();

    if (permissionResult.granted === false) {
      alert("No Gallery Access!");
      return;
    }
    //Allow editing after taking the image from gallery
    const result = await ImagePicker.launchImageLibraryAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.Images,
      allowsEditing: true,
      quality: 1,
    });

    // Explore the result
    // console.log(result);

    if (!result.canceled) {
      setPickedImagePath(result.assets[0].uri);
      const getPath = result.assets[0].uri;
      results.push(getPath)
      console.log(results)
    }
  }

  // This function is triggered when the "Open camera" button pressed
  const openCamera = async () => {
    // Ask the user for the permission to access the camera
    const permissionResult = await ImagePicker.requestCameraPermissionsAsync();

    if (permissionResult.granted === false) {
      alert("No Camera Access!");
      return;
    }

    //Allow editing after taking the picture
    const result = await ImagePicker.launchCameraAsync({
      mediaTypes: ImagePicker.MediaTypeOptions.Images,
      allowsEditing: true,
      quality: 1,
    });

    // Explore the result
    // console.log(result);

    if (!result.canceled) {
      setPickedImagePath(result.assets[0].uri);
      // console.log(result.assets[0].uri);
      const getPath = result.assets[0].uri;
      results.push(getPath)
      console.log(results)
    }
  }
  const mlOCR = async() => {
    const resultFromUri = await MlkitOcr.detectFromUri(results[0]);
    console.log(resultFromUri)
  }

  return (
    <View style={styles.container}>

      <Text style={{color:'black',fontSize:20}}>Preview Image</Text>
      <View style={styles.imageContainer}>
        {
          pickedImagePath !== '' && <Image
            source={{ uri: pickedImagePath }}
            
            style={styles.previewImage}
          />
        }
      </View>
      <View style={styles.buttonContainer}>
        
        <TouchableOpacity style={styles.button} onPress={showImagePicker}><Text style={styles.buttonText}>Select from gallery</Text></TouchableOpacity>

        <TouchableOpacity style={styles.button} onPress={openCamera}><Text style={styles.buttonText}>Open camera</Text></TouchableOpacity>

      </View>
      <TouchableOpacity style={{marginTop:15}} onPress={mlOCR}><Text style={{color:'black',fontSize:20}}>Continue</Text></TouchableOpacity>      
    </View>
  );
}
export default App;

I tried Tesseract.js, but I read that it is not supported in Expo without ejecting (which I don't know how to do). If there are any other methods of extracting text from images, please let me know how to do so.
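For context, the failure mode can be reproduced outside React Native: a plain `const results = []` declared inside a function component is recreated on every render, so anything pushed into it is gone by the time `mlOCR` reads `results[0]`, and `detectFromUri` then receives `undefined`. A minimal plain-JavaScript sketch (the `appBody` function below is a stand-in for one render of the App component, not real React):

```javascript
// Simulates what happens to `const results = []` across re-renders.
// Each call to appBody() represents one render of the App component.
function appBody() {
  const results = []; // recreated empty on every render
  return results;
}

// Render 1: the image-picker callback pushes the URI
let results = appBody();
results.push('file:///picked-image.jpg');
console.log(results[0]); // 'file:///picked-image.jpg'

// setPickedImagePath(...) triggers a re-render; the array starts over
results = appBody();
console.log(results[0]); // undefined -> detectFromUri(undefined) fails
```

This is why the URI has to live somewhere that survives re-renders, such as React state.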


Solution

  • I realized that instead of pushing the URI into an array, I can use a useState hook:

    const [uriValue, setUriValue] = useState(null);

    and set uriValue inside the handler where I launch the camera or the gallery:

        const result = await ImagePicker.launchImageLibraryAsync({
          mediaTypes: ImagePicker.MediaTypeOptions.Images,
          allowsEditing: true,
          quality: 1,
        });
    
        // Explore the result
        // console.log(result);
    
        if (!result.canceled) {
          setPickedImagePath(result.assets[0].uri);
          setUriValue(result.assets[0].uri);
      console.log(result.assets[0].uri);
        }
      }
    
      // This function is triggered when the "Open camera" button pressed
      const openCamera = async () => {
        // Ask the user for the permission to access the camera
        const permissionResult = await ImagePicker.requestCameraPermissionsAsync();
    
        if (permissionResult.granted === false) {
          alert("No Camera Access!");
          return;
        }
    
        //Allow editing after taking the picture
        const result = await ImagePicker.launchCameraAsync({
          mediaTypes: ImagePicker.MediaTypeOptions.Images,
          allowsEditing: true,
          quality: 1,
        });
    
        // Explore the result
        // console.log(result);
    
        if (!result.canceled) {
          setPickedImagePath(result.assets[0].uri);
          setUriValue(result.assets[0].uri);
          console.log(result.assets[0].uri);
        }
      }
    

    This worked for me... now I can just access the URI using uriValue.
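With that change, the mlOCR handler from the question should read the URI from uriValue instead of results[0]. A sketch of the idea, with a stub standing in for react-native-mlkit-ocr so the guard logic is self-contained (the stub's return shape is an assumption; the real detectFromUri resolves to the recognized text blocks):

```javascript
// Stub in place of MlkitOcr from react-native-mlkit-ocr (assumption:
// the real detectFromUri takes a file URI and resolves asynchronously).
const MlkitOcr = {
  detectFromUri: async (uri) => [{ text: `text found in ${uri}` }],
};

// Factory mirroring the component's mlOCR handler, with uriValue
// passed in the way the useState value would be captured in a closure.
const makeMlOCR = (uriValue) => async () => {
  // Guard against pressing "Continue" before any image was picked,
  // which is what produced the "cannot read property of null" error.
  if (!uriValue) {
    return null;
  }
  return MlkitOcr.detectFromUri(uriValue);
};

// No image picked yet -> the handler bails out instead of crashing
makeMlOCR(null)().then((blocks) => console.log(blocks)); // null
// Image picked -> OCR runs on the stored URI
makeMlOCR('file:///photo.jpg')().then((blocks) => console.log(blocks[0].text));
```

Inside the component this would just be `if (!uriValue) return;` at the top of mlOCR, followed by `await MlkitOcr.detectFromUri(uriValue)`.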