r, duplicate-data

R: find duplicated rows, regardless of order


I've been thinking about this problem for a whole night. Here is my matrix:

'a' '#' 3
'#' 'a' 3
 0  'I am' 2
'I am' 0 2

.....

I want to treat rows like the first two as the same, because they just contain 'a' and '#' in a different order. In my case, I want to delete such rows. The toy example is simple: the first two rows are the same, and the third and fourth are the same. But in my real data set, I don't know in advance where the 'same' rows are.

I'm writing in R. Thanks.


Solution

  • Perhaps something like this would work for you. It is not clear what your desired output is though.

    x <- structure(c("a", "#", "0", "I am", "#", "a", "I am", "0", "3", 
                     "3", "2", "2"), .Dim = c(4L, 3L))
    x
    #      [,1]   [,2]   [,3]
    # [1,] "a"    "#"    "3" 
    # [2,] "#"    "a"    "3" 
    # [3,] "0"    "I am" "2" 
    # [4,] "I am" "0"    "2" 
    
    
    duplicated(
      lapply(1:nrow(x), function(y){
        A <- x[y, ]
        A[order(A)]
      }))
    # [1] FALSE  TRUE FALSE  TRUE
    

    This basically splits the matrix up by row, then sorts each row. `duplicated` works on lists too, so you just wrap the whole thing with `duplicated()` to find which items (rows) are duplicated.
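
    Since you said you want to delete such rows, here is a sketch of the full step: use the logical vector from `duplicated()` to subset the matrix, which keeps the first occurrence of each row and drops the reordered copies (assuming that is the desired output).

    ```r
    # Same example matrix as above
    x <- structure(c("a", "#", "0", "I am", "#", "a", "I am", "0", "3",
                     "3", "2", "2"), .Dim = c(4L, 3L))

    # Sort each row so that rows with the same values in a different
    # order become identical, then mark the duplicates
    dup <- duplicated(lapply(seq_len(nrow(x)), function(y) sort(x[y, ])))

    # Keep only the non-duplicated rows
    x[!dup, , drop = FALSE]
    #      [,1] [,2]   [,3]
    # [1,] "a"  "#"    "3"
    # [2,] "0"  "I am" "2"
    ```

    `drop = FALSE` keeps the result a matrix even if only one row survives; without it, R would silently simplify a single remaining row to a plain vector.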